Posted to hdfs-user@hadoop.apache.org by anand sharma <an...@gmail.com> on 2012/08/09 12:16:19 UTC

namenode instantiation error

Hi, I am just learning Hadoop and I am setting up a development
environment with CDH3 in pseudo-distributed mode, without any ssh
configuration, on CentOS 6.2. I can run the sample programs as usual,
but when I try to run the namenode, this is the error it logs...

[hive@localhost ~]$ hadoop namenode
12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
************************************************************/
12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)

12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
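The trace itself narrows the problem down: the namenode, running as the
hive user, cannot open /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock,
which means the name directory is not writable by hive. A minimal diagnostic
sketch, assuming a packaged CDH3 install where the HDFS daemons normally run
as the hdfs user (user names, service names, and paths may differ on your box):

    # Check who owns the name directory; "Permission denied" on
    # in_use.lock usually means the current user is not the owner.
    ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name

    # Either start the namenode as the owning user...
    sudo -u hdfs hadoop namenode

    # ...or use the packaged init script, which switches users itself.
    sudo /etc/init.d/hadoop-0.20-namenode start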

Re: namenode instantiation error

Posted by shashwat shriparv <dw...@gmail.com>.
I think you need to install and configure ssh
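For reference, ssh is only needed by the start-dfs.sh/start-all.sh wrapper
scripts, which log in to each node (localhost included) to launch the
daemons; running hadoop namenode in the foreground does not use it. If you
do want the wrappers, a minimal passwordless-ssh sketch for a single
pseudo-distributed box, assuming a stock OpenSSH setup:

    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # key with empty passphrase
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    ssh localhost                                   # should log in without a prompt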



On Thu, Aug 9, 2012 at 4:30 PM, anand sharma <an...@gmail.com> wrote:

> Thanks all for the reply. Yes, the user has access to that directory and I
> have already formatted the namenode; just for simplicity I am not using ssh,
> as I am doing things for the first time.
>
> On Thu, Aug 9, 2012 at 3:58 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:
>
>> format the filesystem
>>
>> bin/hadoop namenode -format
>>
>> then try to start namenode :)
>>
>>
>>> On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>
>>> Hello Anand,
>>>
>>>     Is there any specific reason behind not using ssh??
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> wrote:
>>> > Hi, I am just learning Hadoop and I am setting up a development
>>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>>> > but when I try to run the namenode, this is the error it logs...
>>> >
>>> > [hive@localhost ~]$ hadoop namenode
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> > /************************************************************
>>> > STARTUP_MSG: Starting NameNode
>>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> > STARTUP_MSG:   args = []
>>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
>>> > ************************************************************/
>>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >         at java.io.RandomAccessFile.open(Native Method)
>>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >         at java.io.RandomAccessFile.open(Native Method)
>>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> > /************************************************************
>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> > ************************************************************/
>>> >
>>> >
>>>
>>
>>
>>
>> --
>>
>>
>> ∞
>> Shashwat Shriparv
>>
>>
>>
>


-- 


∞
Shashwat Shriparv

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
Thanks all for the reply. Yes, the user has access to that directory and I
have already formatted the namenode; just for simplicity I am not using ssh,
as I am doing things for the first time.
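A hedged way to double-check the "has access" part: compare the user the
shell runs as with the owner of the name directory and its lock file. A
format run earlier as a different user (say root, or the packaged hdfs
account) leaves files that hive cannot lock later:

    id                                                # current user and groups
    ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
    ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock

    # If the owner differs, one option is to take ownership (illustrative
    # only; use whatever user/group your setup actually runs HDFS as):
    sudo chown -R hive:hive /var/lib/hadoop-0.20/cache/hadoop/dfs/name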

On Thu, Aug 9, 2012 at 3:58 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:

> format the filesystem
>
> bin/hadoop namenode -format
>
> then try to start namenode :)
>
>
> On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
>> Hello Anand,
>>
>>     Is there any specific reason behind not using ssh??
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> wrote:
>> > Hi, I am just learning Hadoop and I am setting up a development
>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>> > but when I try to run the namenode, this is the error it logs...
>> >
>> > [hive@localhost ~]$ hadoop namenode
>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> > /************************************************************
>> > STARTUP_MSG: Starting NameNode
>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> > STARTUP_MSG:   args = []
>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
>> > ************************************************************/
>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >         at java.io.RandomAccessFile.open(Native Method)
>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >         at java.io.RandomAccessFile.open(Native Method)
>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >
>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> > /************************************************************
>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> > ************************************************************/
>> >
>> >
>>
>
>
>
> --
>
>
> ∞
> Shashwat Shriparv
>
>
>

Re: namenode instantiation error

Posted by shashwat shriparv <dw...@gmail.com>.
format the filesystem

bin/hadoop namenode -format

then try to start namenode :)
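One caveat to add here: the format and the namenode have to run as the same
user, otherwise a fresh format just reproduces the in_use.lock permission
error. A sketch, assuming the packaged hdfs account owns dfs.name.dir as in
a stock CDH3 install:

    # Format and start as the user that owns the name directory.
    sudo -u hdfs hadoop namenode -format
    sudo -u hdfs hadoop namenode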

On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Anand,
>
>     Is there any specific reason behind not using ssh??
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> wrote:
> > Hi, I am just learning Hadoop and I am setting up a development
> > environment with CDH3 in pseudo-distributed mode, without any ssh
> > configuration, on CentOS 6.2. I can run the sample programs as usual,
> > but when I try to run the namenode, this is the error it logs...
> >
> > [hive@localhost ~]$ hadoop namenode
> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
> > ************************************************************/
> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >         at java.io.RandomAccessFile.open(Native Method)
> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >         at java.io.RandomAccessFile.open(Native Method)
> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >
> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> > ************************************************************/
> >
> >
>



-- 


∞
Shashwat Shriparv

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Hi Tariq,

I'm not getting off to the right start..

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop trying to learn my self can anyone help me on the
> > same.
> > i have installed CDH3.
> >
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason behind not using ssh??
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >> wrote:
> >> > Hi, i am just learning the Hadoop and i am setting the development
> >> > environment with CDH3 pseudo distributed mode without any ssh
> >> > cofiguration
> >> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >> > and
> >> > run namenode this is the error it logs...
> >> >
> >> > [hive@localhost ~]$ hadoop namenode
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >> > /************************************************************
> >> > STARTUP_MSG: Starting NameNode
> >> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >> > STARTUP_MSG:   args = []
> >> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >> > STARTUP_MSG:   build =
> >> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> >> > May
> >> > 7 14:01:59 PDT 2012
> >> > ************************************************************/
> >> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >> > processName=NameNode, sessionId=null
> >> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >> > NameNodeMeterics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isPermissionEnabled=false
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >> > dfs.block.invalidate.limit=1000
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >> > FSNamesystemMetrics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >> > initialization
> >> > failed.
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> >
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >> > /************************************************************
> >> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >> > ************************************************************/
> >> >
> >> >
> >
> >
>

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Thanks Tariq,
let me start with that.

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop trying to learn my self can anyone help me on the
> > same.
> > i have installed CDH3.
> >
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason behind not using ssh??
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >> wrote:
> >> > Hi, i am just learning the Hadoop and i am setting the development
> >> > environment with CDH3 pseudo distributed mode without any ssh
> >> > cofiguration
> >> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >> > and
> >> > run namenode this is the error it logs...
> >> >
> >> > [hive@localhost ~]$ hadoop namenode
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >> > /************************************************************
> >> > STARTUP_MSG: Starting NameNode
> >> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >> > STARTUP_MSG:   args = []
> >> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >> > STARTUP_MSG:   build =
> >> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> >> > May
> >> > 7 14:01:59 PDT 2012
> >> > ************************************************************/
> >> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >> > processName=NameNode, sessionId=null
> >> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >> > NameNodeMeterics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isPermissionEnabled=false
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >> > dfs.block.invalidate.limit=1000
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >> > FSNamesystemMetrics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >> > initialization
> >> > failed.
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> >
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >> > /************************************************************
> >> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >> > ************************************************************/
> >> >
> >> >
> >
> >
>

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
It's false, Abhishek:

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

<property>
    <!-- specify this so that running 'hadoop namenode -format' formats
         the right dir -->
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
</property>
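
Note that dfs.permissions only controls permission checks inside HDFS; it
has no effect on the local filesystem permissions of the dfs.name.dir
directory itself, which is what the in_use.lock error is about. As a quick
sketch, compare the owner of that directory with the user starting the
namenode (path taken from the config above):

ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
whoami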


On Thu, Aug 9, 2012 at 6:29 PM, Abhishek <ab...@gmail.com> wrote:

> Hi Anand,
>
> What are the permissions, on dfs.name.dir directory in hdfs-site.xml
>
> Regards
> Abhishek
>
>
> Sent from my iPhone
>
> On Aug 9, 2012, at 8:41 AM, anand sharma <an...@gmail.com> wrote:
>
> yea  Tariq !1 its a fresh installation i m doing it for the first time,
> hope someone will know the error code and the reason of error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
>> Hi Anand,
>>
>>       Have you tried any other Hadoop distribution or version also??In
>> that case first remove the older one and start fresh.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> > Hello Rahul,
>> >
>> >    That's great. That's the best way to learn(I am doing the same :)
>> > ). Since the installation part is over, I would suggest to get
>> > yourself familiar with Hdfs and MapReduce first. Try to do basic
>> > filesystem operations using the Hdfs API and run the wordcount
>> > program, if you haven't done it yet. Then move ahead.
>> >
>> > Regards,
>> >     Mohammad Tariq
>> >
>> >
>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
>> wrote:
>> >> Hi Tariq,
>> >>
>> >> I am also new to Hadoop trying to learn my self can anyone help me on
>> the
>> >> same.
>> >> i have installed CDH3.
>> >>
>> >>
>> >>
>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> >>>
>> >>> Hello Anand,
>> >>>
>> >>>     Is there any specific reason behind not using ssh??
>> >>>
>> >>> Regards,
>> >>>     Mohammad Tariq
>> >>>
>> >>>
>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> >>> wrote:
>> >>> > Hi, i am just learning the Hadoop and i am setting the development
>> >>> > environment with CDH3 pseudo distributed mode without any ssh
>> >>> > cofiguration
>> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i
>> try
>> >>> > and
>> >>> > run namenode this is the error it logs...
>> >>> >
>> >>> > [hive@localhost ~]$ hadoop namenode
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> >>> > /************************************************************
>> >>> > STARTUP_MSG: Starting NameNode
>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> >>> > STARTUP_MSG:   args = []
>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> >>> > STARTUP_MSG:   build =
>> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
>> Mon
>> >>> > May
>> >>> > 7 14:01:59 PDT 2012
>> >>> > ************************************************************/
>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> >>> > processName=NameNode, sessionId=null
>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>> >>> > NameNodeMeterics using context
>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
>> entries
>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152,
>> actual=2097152
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
>> (auth:SIMPLE)
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> isPermissionEnabled=false
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> >>> > dfs.block.invalidate.limit=1000
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> isAccessTokenEnabled=false
>> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>> >>> > FSNamesystemMetrics using context
>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>> >>> > initialization
>> >>> > failed.
>> >>> > java.io.FileNotFoundException:
>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> >>> > denied)
>> >>> > at java.io.RandomAccessFile.open(Native Method)
>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> > at
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> > at
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>> >>> > java.io.FileNotFoundException:
>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> >>> > denied)
>> >>> > at java.io.RandomAccessFile.open(Native Method)
>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> > at
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> > at
>> >>> >
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> > at
>> >>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> >
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> >>> > /************************************************************
>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
>> 127.0.0.1
>> >>> > ************************************************************/
>> >>> >
>> >>> >
>> >>
>> >>
>>
>
>

Re: namenode instantiation error

Posted by Abhishek <ab...@gmail.com>.
Hi Anand,

What are the permissions on the dfs.name.dir directory configured in hdfs-site.xml?
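
You can check them with a single command; the path below is the one from
the log:

ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name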

Regards
Abhishek 


Sent from my iPhone

On Aug 9, 2012, at 8:41 AM, anand sharma <an...@gmail.com> wrote:

> yea  Tariq !1 its a fresh installation i m doing it for the first time, hope someone will know the error code and the reason of error.
> 
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hi Anand,
> 
>       Have you tried any other Hadoop distribution or version also??In
> that case first remove the older one and start fresh.
> 
> Regards,
>     Mohammad Tariq
> 
> 
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn(I am doing the same :)
> > ). Since the installation part is over, I would suggest to get
> > yourself familiar with Hdfs and MapReduce first. Try to do basic
> > filesystem operations using the Hdfs API and run the wordcount
> > program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop trying to learn my self can anyone help me on the
> >> same.
> >> i have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >>> wrote:
> >>> > Hi, i am just learning the Hadoop and i am setting the development
> >>> > environment with CDH3 pseudo distributed mode without any ssh
> >>> > cofiguration
> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >>> > and
> >>> > run namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build =
> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> >>> > May
> >>> > 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> > processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> > NameNodeMeterics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> > dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> > FSNamesystemMetrics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >>> > initialization
> >>> > failed.
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>
> >>
> 

Re: namenode instantiation error

Posted by Nitin Pawar <ni...@gmail.com>.
Anand, if you are trying a single-node instance, I had written an ugly
script to set up single-node mode.

You can refer to it @ https://github.com/nitinpawar/hadoop/

I did face these issues too, but mostly they were permissions related.
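
If it turns out to be the same local permission problem, handing the
storage directory back to the user that runs the daemons usually clears
it. A sketch, assuming the packaged CDH3 users and the path from the log
(substitute the user you actually start the namenode as):

sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs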


On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <su...@gmail.com> wrote:
> have you tried hadoop namenode -format?
>
>
> 2012/8/9 anand sharma <an...@gmail.com>
>>
>> yea  Tariq !1 its a fresh installation i m doing it for the first time,
>> hope someone will know the error code and the reason of error.
>>
>>
>> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>>
>>> Hi Anand,
>>>
>>>       Have you tried any other Hadoop distribution or version also??In
>>> that case first remove the older one and start fresh.
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com>
>>> wrote:
>>> > Hello Rahul,
>>> >
>>> >    That's great. That's the best way to learn(I am doing the same :)
>>> > ). Since the installation part is over, I would suggest to get
>>> > yourself familiar with Hdfs and MapReduce first. Try to do basic
>>> > filesystem operations using the Hdfs API and run the wordcount
>>> > program, if you haven't done it yet. Then move ahead.
>>> >
>>> > Regards,
>>> >     Mohammad Tariq
>>> >
>>> >
>>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
>>> > wrote:
>>> >> Hi Tariq,
>>> >>
>>> >> I am also new to Hadoop trying to learn my self can anyone help me on
>>> >> the
>>> >> same.
>>> >> i have installed CDH3.
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
>>> >> wrote:
>>> >>>
>>> >>> Hello Anand,
>>> >>>
>>> >>>     Is there any specific reason behind not using ssh??
>>> >>>
>>> >>> Regards,
>>> >>>     Mohammad Tariq
>>> >>>
>>> >>>
>>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> >>> wrote:
>>> >>> > Hi, i am just learning the Hadoop and i am setting the development
>>> >>> > environment with CDH3 pseudo distributed mode without any ssh
>>> >>> > cofiguration
>>> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i
>>> >>> > try
>>> >>> > and
>>> >>> > run namenode this is the error it logs...
>>> >>> >
>>> >>> > [hive@localhost ~]$ hadoop namenode
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> >>> > /************************************************************
>>> >>> > STARTUP_MSG: Starting NameNode
>>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> >>> > STARTUP_MSG:   args = []
>>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> >>> > STARTUP_MSG:   build =
>>> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
>>> >>> > Mon
>>> >>> > May
>>> >>> > 7 14:01:59 PDT 2012
>>> >>> > ************************************************************/
>>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics
>>> >>> > with
>>> >>> > processName=NameNode, sessionId=null
>>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> >>> > NameNodeMeterics using context
>>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
>>> >>> > entries
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152,
>>> >>> > actual=2097152
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
>>> >>> > (auth:SIMPLE)
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> >>> > isPermissionEnabled=false
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> >>> > dfs.block.invalidate.limit=1000
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> >>> > isAccessTokenEnabled=false
>>> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> >>> > FSNamesystemMetrics using context
>>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>>> >>> > initialization
>>> >>> > failed.
>>> >>> > java.io.FileNotFoundException:
>>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> >>> > denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>>> >>> > java.io.FileNotFoundException:
>>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> >>> > denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at
>>> >>> >
>>> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> >
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> >>> > /************************************************************
>>> >>> > SHUTDOWN_MSG: Shutting down NameNode at
>>> >>> > localhost.localdomain/127.0.0.1
>>> >>> > ************************************************************/
>>> >>> >
>>> >>> >
>>> >>
>>> >>
>>
>>
>



-- 
Nitin Pawar

RE: namenode instantiation error

Posted by Vinayakumar B <vi...@huawei.com>.
Hi Anand,

It is clearly telling you that the namenode is not able to access the lock
file inside the name dir.

 

/var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)

 

Did you format the namenode as one user and then start the namenode as
another user?

 

Try formatting and starting from the same user's console.
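
A minimal sketch of that, assuming the namenode is meant to run as the hive
user and that the name dir is the one from the log (both assumptions; adjust
the user and paths to your setup):

[root@localhost ~]# chown -R hive:hive /var/lib/hadoop-0.20/cache
[root@localhost ~]# su - hive
[hive@localhost ~]$ hadoop namenode -format
[hive@localhost ~]$ hadoop namenode

That way in_use.lock is created and later reopened by the same owner, which
avoids exactly this Permission denied error.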

 

From: anand sharma [mailto:anand2sharma@gmail.com] 
Sent: Friday, August 10, 2012 9:37 AM
To: user@hadoop.apache.org
Subject: Re: namenode instantiation error

 

yes Owen i did.

On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <su...@gmail.com> wrote:

have you tried hadoop namenode -format?

2012/8/9 anand sharma <an...@gmail.com>

yea  Tariq !1 its a fresh installation i m doing it for the first time, hope
someone will know the error code and the reason of error.

 

On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:

Hi Anand,

      Have you tried any other Hadoop distribution or version also??In
that case first remove the older one and start fresh.

Regards,
    Mohammad Tariq



On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
wrote:
>> Hi Tariq,
>>
>> I am also new to Hadoop trying to learn my self can anyone help me on the
>> same.
>> i have installed CDH3.
>>
>>
>>
>> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
wrote:
>>>
>>> Hello Anand,
>>>
>>>     Is there any specific reason behind not using ssh??
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> wrote:
>>> > Hi, i am just learning the Hadoop and i am setting the development
>>> > environment with CDH3 pseudo distributed mode without any ssh
>>> > cofiguration
>>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
>>> > and
>>> > run namenode this is the error it logs...
>>> >
>>> > [hive@localhost ~]$ hadoop namenode
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> > /************************************************************
>>> > STARTUP_MSG: Starting NameNode
>>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> > STARTUP_MSG:   args = []
>>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> > STARTUP_MSG:   build =
>>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
>>> > May
>>> > 7 14:01:59 PDT 2012
>>> > ************************************************************/
>>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>>> > processName=NameNode, sessionId=null
>>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> > NameNodeMeterics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> > FSNamesystemMetrics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>>> > java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> > /************************************************************
>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> > ************************************************************/
>>> >
>>> >
>>
>>

 

 

 


Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
Yes, Vinay, you are right: I am formatting it as root and running it as the
hive user, because when I try to format the namenode as the hive user it says:

[hive@localhost ~]$ hadoop namenode -format
12/08/10 21:42:13 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
-r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May
 7 14:01:59 PDT 2012
************************************************************/
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) y
Format aborted in /var/lib/hadoop-0.20/cache/hadoop/dfs/name
12/08/10 21:42:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1

Yes, I think that I may need to install ssh in order to get it up and
running.
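
(Note: two things stand out in the transcript above. First, the 0.20.x format
prompt is case-sensitive: it only accepts an uppercase Y, so the lowercase y
shown above aborts the format even when permissions are fine. For example:

[hive@localhost ~]$ hadoop namenode -format
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) Y

Second, ssh should not be needed for this: the start-dfs.sh/start-all.sh
helper scripts use ssh to reach the worker nodes, but running hadoop namenode
directly in pseudo-distributed mode does not.)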

On Fri, Aug 10, 2012 at 10:14 AM, Vinayakumar B <vi...@huawei.com> wrote:

> Hi Anand,
>
> It is clearly telling you that the namenode is not able to access the lock
> file inside the name dir.
>
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>
> Did you format the namenode as one user and then start the namenode as
> another user?
>
> Try formatting and starting from the same user's console.
>
> From: anand sharma [mailto:anand2sharma@gmail.com]
> Sent: Friday, August 10, 2012 9:37 AM
> To: user@hadoop.apache.org
> Subject: Re: namenode instantiation error
>
> yes Owen i did.
>
> On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <su...@gmail.com> wrote:
>
> have you tried hadoop namenode -format?
>
> 2012/8/9 anand sharma <an...@gmail.com>
>
> yea  Tariq !1 its a fresh installation i m doing it for the first time,
> hope someone will know the error code and the reason of error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
> Hi Anand,
>
>       Have you tried any other Hadoop distribution or version also??In
> that case first remove the older one and start fresh.
>
> Regards,
>     Mohammad Tariq
>
>
>
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn(I am doing the same :)
> > ). Since the installation part is over, I would suggest to get
> > yourself familiar with Hdfs and MapReduce first. Try to do basic
> > filesystem operations using the Hdfs API and run the wordcount
> > program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop trying to learn my self can anyone help me on
> the
> >> same.
> >> i have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >>> wrote:
> >>> > Hi, i am just learning the Hadoop and i am setting the development
> >>> > environment with CDH3 pseudo distributed mode without any ssh
> >>> > cofiguration
> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >>> > and
> >>> > run namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build =
> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
> Mon
> >>> > May
> >>> > 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> > processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> > NameNodeMeterics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> > dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> > FSNamesystemMetrics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >>> > initialization
> >>> > failed.
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>
> >>

> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
Yes Owen, I did.

On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <su...@gmail.com> wrote:

> have you tried hadoop namenode -format?
>
> 2012/8/9 anand sharma <an...@gmail.com>
>
>> yeah Tariq! It's a fresh installation, I'm doing it for the first time;
>> hope someone will know the error code and the reason for the error.
>>
>>
>> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com>wrote:
>>
>>> Hi Anand,
>>>
>>>       Have you tried any other Hadoop distribution or version also? In
>>> that case, first remove the older one and start fresh.
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com>
>>> wrote:
>>> > Hello Rahul,
>>> >
>>> >    That's great. That's the best way to learn (I am doing the same :)).
>>> > Since the installation part is over, I would suggest getting yourself
>>> > familiar with Hdfs and MapReduce first. Try to do basic filesystem
>>> > operations using the Hdfs API and run the wordcount program, if you
>>> > haven't done it yet. Then move ahead.
>>> >
>>> > Regards,
>>> >     Mohammad Tariq
>>> >
>>> >
>>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
>>> wrote:
>>> >> Hi Tariq,
>>> >>
>>> >> I am also new to Hadoop, trying to learn by myself; can anyone help
>>> >> me with the same?
>>> >> I have installed CDH3.
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
>>> wrote:
>>> >>>
>>> >>> Hello Anand,
>>> >>>
>>> >>>     Is there any specific reason behind not using ssh?
>>> >>>
>>> >>> Regards,
>>> >>>     Mohammad Tariq
>>> >>>
>>> >>>
>>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma
>>> >>> <anand2sharma@gmail.com> wrote:
>>> >>> > Hi, I am just learning Hadoop and I am setting up the development
>>> >>> > environment with CDH3 pseudo-distributed mode without any ssh
>>> >>> > configuration in CentOS 6.2. I can run the sample programs as usual,
>>> >>> > but when I try to run the namenode, this is the error it logs...
>>> >>> >
>>> >>> > [hive@localhost ~]$ hadoop namenode
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> >>> > /************************************************************
>>> >>> > STARTUP_MSG: Starting NameNode
>>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> >>> > STARTUP_MSG:   args = []
>>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> >>> > STARTUP_MSG:   build =
>>> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
>>> Mon
>>> >>> > May
>>> >>> > 7 14:01:59 PDT 2012
>>> >>> > ************************************************************/
>>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics
>>> with
>>> >>> > processName=NameNode, sessionId=null
>>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> >>> > NameNodeMeterics using context
>>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
>>> entries
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152,
>>> actual=2097152
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
>>> (auth:SIMPLE)
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> isPermissionEnabled=false
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> >>> > dfs.block.invalidate.limit=1000
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> isAccessTokenEnabled=false
>>> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> >>> > FSNamesystemMetrics using context
>>> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>>> >>> > initialization
>>> >>> > failed.
>>> >>> > java.io.FileNotFoundException:
>>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> >>> > denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>>> >>> > java.io.FileNotFoundException:
>>> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> >>> > denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at
>>> >>> >
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at
>>> >>> >
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> >
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> >>> > /************************************************************
>>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
>>> 127.0.0.1
>>> >>> > ************************************************************/
>>> >>> >
>>> >>> >
>>> >>
>>> >>
>>>
>>
>>
>
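
As a concrete starting point for the Hdfs API exercise Tariq suggests in
the thread above, here is a minimal sketch in Java. The class name and the
path are made up for illustration, and it assumes the plain
org.apache.hadoop.fs API that ships with 0.20-era Hadoop:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Writes a small file into HDFS, then checks that it exists.
public class HdfsHello {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();    // picks up core-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);        // connects to the configured fs.default.name
    Path p = new Path("/user/hive/hello.txt");   // illustrative path
    FSDataOutputStream out = fs.create(p);       // create (or overwrite) the file
    out.writeUTF("hello hdfs");
    out.close();
    System.out.println("exists: " + fs.exists(p));
    fs.close();
  }
}

The stock wordcount example can then be run with something like "hadoop jar
/usr/lib/hadoop-0.20/hadoop-examples.jar wordcount <in> <out>"; treat the
jar name and location as assumptions, since they vary by install.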

Re: namenode instantiation error

Posted by Nitin Pawar <ni...@gmail.com>.
Anand, if you are trying a single-node instance, I had written an ugly
script to set up single-node mode.

You can refer to it @ https://github.com/nitinpawar/hadoop/

I did face these issues too, but mostly they were due to permissions.
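
For reference, the ssh-less single-node route is workable: ssh is only
needed by the start-all.sh helper scripts, and each daemon can be started
by hand instead. A bare-bones sketch, assuming every command runs as the
one user that owns the dfs directories (the path below is the CDH3 default
and is an assumption; check dfs.name.dir in your config):

ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name   # confirm this user owns the storage dir
hadoop namenode -format                             # answer the prompt with an uppercase Y
hadoop namenode &
hadoop datanode &
hadoop jobtracker &
hadoop tasktracker &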


On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <su...@gmail.com> wrote:
> have you tried hadoop namenode -format?
>
>
> 2012/8/9 anand sharma <an...@gmail.com>
>>
>> yeah Tariq! It's a fresh installation, I'm doing it for the first time;
>> hope someone will know the error code and the reason for the error.
>>
>>
>> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>>
>>> Hi Anand,
>>>
>>>       Have you tried any other Hadoop distribution or version also? In
>>> that case, first remove the older one and start fresh.
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com>
>>> wrote:
>>> > Hello Rahul,
>>> >
>>> >    That's great. That's the best way to learn (I am doing the same :)).
>>> > Since the installation part is over, I would suggest getting yourself
>>> > familiar with Hdfs and MapReduce first. Try to do basic filesystem
>>> > operations using the Hdfs API and run the wordcount program, if you
>>> > haven't done it yet. Then move ahead.
>>> >
>>> > Regards,
>>> >     Mohammad Tariq
>>> >
>>> >
>>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
>>> > wrote:
>>> >> Hi Tariq,
>>> >>
>>> >> I am also new to Hadoop, trying to learn by myself; can anyone help
>>> >> me with the same?
>>> >> I have installed CDH3.
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
>>> >> wrote:
>>> >>>
>>> >>> Hello Anand,
>>> >>>
>>> >>>     Is there any specific reason behind not using ssh?
>>> >>>
>>> >>> Regards,
>>> >>>     Mohammad Tariq
>>> >>>
>>> >>>
>>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> >>> wrote:
>>> >>> > Hi, I am just learning Hadoop and I am setting up the development
>>> >>> > environment with CDH3 pseudo-distributed mode without any ssh
>>> >>> > configuration in CentOS 6.2. I can run the sample programs as usual,
>>> >>> > but when I try to run the namenode, this is the error it logs...
>>> >>> >
>>> >>> > [hive@localhost ~]$ hadoop namenode
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> >>> > /************************************************************
>>> >>> > STARTUP_MSG: Starting NameNode
>>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> >>> > STARTUP_MSG:   args = []
>>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>>> >>> > ************************************************************/
>>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>>> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> >>> > at java.io.RandomAccessFile.open(Native Method)
>>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >>> >
>>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> >>> > /************************************************************
>>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> >>> > ************************************************************/
>>> >>> >
>>> >>> >
>>> >>
>>> >>
>>
>>
>



-- 
Nitin Pawar

Re: namenode instantiation error

Posted by Owen Duan <su...@gmail.com>.
Have you tried hadoop namenode -format?
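
If you do format, run it as a user that can actually write the name
directory, and be aware it wipes any existing HDFS metadata. A rough
sketch, assuming the stock CDH3 layout where the hdfs user owns the
storage directories:

  # re-initialize dfs.name.dir (destroys any existing metadata!)
  $ sudo -u hdfs hadoop namenode -format

  # then start the namenode as the same user
  $ sudo -u hdfs hadoop namenode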

2012/8/9 anand sharma <an...@gmail.com>

> yea  Tariq !1 its a fresh installation i m doing it for the first time,
> hope someone will know the error code and the reason of error.
>
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
>> Hi Anand,
>>
>>       Have you tried any other Hadoop distribution or version also??In
>> that case first remove the older one and start fresh.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> > Hello Rahul,
>> >
>> >    That's great. That's the best way to learn(I am doing the same :)
>> > ). Since the installation part is over, I would suggest to get
>> > yourself familiar with Hdfs and MapReduce first. Try to do basic
>> > filesystem operations using the Hdfs API and run the wordcount
>> > program, if you haven't done it yet. Then move ahead.
>> >
>> > Regards,
>> >     Mohammad Tariq
>> >
>> >
>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
>> wrote:
>> >> Hi Tariq,
>> >>
>> >> I am also new to Hadoop trying to learn my self can anyone help me on
>> the
>> >> same.
>> >> i have installed CDH3.
>> >>
>> >>
>> >>
>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> >>>
>> >>> Hello Anand,
>> >>>
>> >>>     Is there any specific reason behind not using ssh??
>> >>>
>> >>> Regards,
>> >>>     Mohammad Tariq
>> >>>
>> >>>
>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> >>> wrote:
>> >>> > Hi, i am just learning the Hadoop and i am setting the development
>> >>> > environment with CDH3 pseudo distributed mode without any ssh
>> >>> > cofiguration
>> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i
>> try
>> >>> > and
>> >>> > run namenode this is the error it logs...
>> >>> >
>> >>> > [hive@localhost ~]$ hadoop namenode
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> >>> > /************************************************************
>> >>> > STARTUP_MSG: Starting NameNode
>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> >>> > STARTUP_MSG:   args = []
>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>> >>> > ************************************************************/
>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> > at java.io.RandomAccessFile.open(Native Method)
>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> > at java.io.RandomAccessFile.open(Native Method)
>> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> >
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> >>> > /************************************************************
>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> >>> > ************************************************************/
>> >>> >
>> >>> >
>> >>
>> >>
>>
>
>

Re: namenode instantiation error

Posted by Abhishek <ab...@gmail.com>.
Hi Anand,

What are the permissions on the dfs.name.dir directory (as set in hdfs-site.xml)?
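
For example, a quick way to check (the config path assumes the usual
CDH3 /etc/hadoop/conf layout; the grep is only a rough way to spot the
property if it is set):

  # configured name dir, if any
  $ grep -A1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml

  # default CDH3 location, taken from the error message
  $ ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name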

Regards
Abhishek 


Sent from my iPhone

On Aug 9, 2012, at 8:41 AM, anand sharma <an...@gmail.com> wrote:

> yea  Tariq !1 its a fresh installation i m doing it for the first time, hope someone will know the error code and the reason of error.
> 
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hi Anand,
> 
>       Have you tried any other Hadoop distribution or version also??In
> that case first remove the older one and start fresh.
> 
> Regards,
>     Mohammad Tariq
> 
> 
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn(I am doing the same :)
> > ). Since the installation part is over, I would suggest to get
> > yourself familiar with Hdfs and MapReduce first. Try to do basic
> > filesystem operations using the Hdfs API and run the wordcount
> > program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop trying to learn my self can anyone help me on the
> >> same.
> >> i have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >>> wrote:
> >>> > Hi, i am just learning the Hadoop and i am setting the development
> >>> > environment with CDH3 pseudo distributed mode without any ssh
> >>> > cofiguration
> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >>> > and
> >>> > run namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>
> >>
> 

Re: namenode instantiation error

Posted by Abhishek <ab...@gmail.com>.
Hi Anand,

What are the permissions on the directory that dfs.name.dir points to in hdfs-site.xml?
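
For example, something like this (a rough sketch, assuming the default
CDH3 layout where dfs.name.dir is
/var/lib/hadoop-0.20/cache/hadoop/dfs/name, as in your log):

  # show owner, group and mode of the NameNode storage directory
  ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
  # packaged CDH3 installs normally create this directory for the hdfs
  # user, so starting the daemon as that user avoids the in_use.lock error
  sudo -u hdfs hadoop namenode

If the directory is owned by hdfs, starting the NameNode as hive will
fail exactly like this; either run it as hdfs or chown the directory to
the user you actually start it with.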

Regards
Abhishek 


Sent from my iPhone

On Aug 9, 2012, at 8:41 AM, anand sharma <an...@gmail.com> wrote:

> yea  Tariq !1 its a fresh installation i m doing it for the first time, hope someone will know the error code and the reason of error.
> 
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hi Anand,
> 
>       Have you tried any other Hadoop distribution or version also??In
> that case first remove the older one and start fresh.
> 
> Regards,
>     Mohammad Tariq
> 
> 
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn(I am doing the same :)
> > ). Since the installation part is over, I would suggest to get
> > yourself familiar with Hdfs and MapReduce first. Try to do basic
> > filesystem operations using the Hdfs API and run the wordcount
> > program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop trying to learn my self can anyone help me on the
> >> same.
> >> i have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >>> wrote:
> >>> > Hi, i am just learning the Hadoop and i am setting the development
> >>> > environment with CDH3 pseudo distributed mode without any ssh
> >>> > cofiguration
> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >>> > and
> >>> > run namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build =
> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> >>> > May
> >>> > 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> > processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> > NameNodeMeterics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> > dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> > FSNamesystemMetrics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >>> > initialization
> >>> > failed.
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>
> >>
> 

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
Yeah, Tariq! It's a fresh installation and I'm doing it for the first time;
hopefully someone will recognize this error and know the reason for it.

On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hi Anand,
>
>       Have you tried any other Hadoop distribution or version also??In
> that case first remove the older one and start fresh.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn(I am doing the same :)
> > ). Since the installation part is over, I would suggest to get
> > yourself familiar with Hdfs and MapReduce first. Try to do basic
> > filesystem operations using the Hdfs API and run the wordcount
> > program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop trying to learn my self can anyone help me on
> the
> >> same.
> >> i have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >>> wrote:
> >>> > Hi, i am just learning the Hadoop and i am setting the development
> >>> > environment with CDH3 pseudo distributed mode without any ssh
> >>> > cofiguration
> >>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >>> > and
> >>> > run namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build =
> >>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on
> Mon
> >>> > May
> >>> > 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> > processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> > NameNodeMeterics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> > dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> > FSNamesystemMetrics using context
> >>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >>> > initialization
> >>> > failed.
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >>> > java.io.FileNotFoundException:
> >>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >>> > denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at
> >>> >
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at
> >>> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>> >
> >>
> >>
>

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hi Anand,

      Have you tried any other Hadoop distribution or version as well? In
that case, first remove the older one and start fresh.
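
On CentOS that usually means removing the old packages before you
install again; roughly like this (the exact package names are a guess,
check what rpm reports on your box first):

  # list whatever hadoop packages are currently installed
  rpm -qa | grep -i hadoop
  # remove the old install (package names vary by distro and version)
  sudo yum remove 'hadoop-0.20*'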

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
>> Hi Tariq,
>>
>> I am also new to Hadoop trying to learn my self can anyone help me on the
>> same.
>> i have installed CDH3.
>>
>>
>>
>> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>>
>>> Hello Anand,
>>>
>>>     Is there any specific reason behind not using ssh??
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> wrote:
>>> > Hi, i am just learning the Hadoop and i am setting the development
>>> > environment with CDH3 pseudo distributed mode without any ssh
>>> > cofiguration
>>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
>>> > and
>>> > run namenode this is the error it logs...
>>> >
>>> > [hive@localhost ~]$ hadoop namenode
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> > /************************************************************
>>> > STARTUP_MSG: Starting NameNode
>>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> > STARTUP_MSG:   args = []
>>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> > STARTUP_MSG:   build =
>>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
>>> > May
>>> > 7 14:01:59 PDT 2012
>>> > ************************************************************/
>>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>>> > processName=NameNode, sessionId=null
>>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> > NameNodeMeterics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> > dfs.block.invalidate.limit=1000
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> > FSNamesystemMetrics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>>> > initialization
>>> > failed.
>>> > java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> > denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>>> > java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>>> > denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> >
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> > /************************************************************
>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> > ************************************************************/
>>> >
>>> >
>>
>>

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Hi Tariq,

I'm not getting off to the right start..

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop trying to learn my self can anyone help me on the
> > same.
> > i have installed CDH3.
> >
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason behind not using ssh??
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >> wrote:
> >> > Hi, i am just learning the Hadoop and i am setting the development
> >> > environment with CDH3 pseudo distributed mode without any ssh
> >> > cofiguration
> >> > in CentOS 6.2 . i can run the sample programs as usual but when i try
> >> > and
> >> > run namenode this is the error it logs...
> >> >
> >> > [hive@localhost ~]$ hadoop namenode
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >> > /************************************************************
> >> > STARTUP_MSG: Starting NameNode
> >> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >> > STARTUP_MSG:   args = []
> >> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >> > STARTUP_MSG:   build =
> >> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> >> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> >> > May
> >> > 7 14:01:59 PDT 2012
> >> > ************************************************************/
> >> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >> > processName=NameNode, sessionId=null
> >> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >> > NameNodeMeterics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isPermissionEnabled=false
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >> > dfs.block.invalidate.limit=1000
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >> > FSNamesystemMetrics using context
> >> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> >> > initialization
> >> > failed.
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> > 12/08/09 20:56:57 ERROR namenode.NameNode:
> >> > java.io.FileNotFoundException:
> >> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> >> > denied)
> >> > at java.io.RandomAccessFile.open(Native Method)
> >> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> > at
> >> >
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> > at
> >> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> >
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >> > /************************************************************
> >> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >> > ************************************************************/
> >> >
> >> >
> >
> >
>

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Thanks Tariq,
let me start with that.
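
Concretely, I will try something like this first (going by your
suggestion; the examples jar path is whatever the CDH3 packages
installed, so /usr/lib/hadoop-0.20/hadoop-examples.jar below is an
assumption on my part):

  # basic filesystem operations against HDFS
  hadoop fs -mkdir input
  hadoop fs -put /etc/hadoop/conf/*.xml input
  hadoop fs -ls input
  # run the bundled wordcount example on that input
  hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount input output
  # look at the result
  hadoop fs -cat 'output/part*'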

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Rahul,
>
>    That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first. Try to do basic
> filesystem operations using the Hdfs API and run the wordcount
> program, if you haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop, trying to learn by myself; can anyone help me
> > with the same?
> > I have installed CDH3.
> >
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason for not using ssh?
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >> wrote:
> >> > [original post and full error log snipped]

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hi Anand,

      Have you tried any other Hadoop distribution or version as well? In
that case, first remove the older one and start fresh, as in the sketch
below.
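
For example, something along these lines (a rough sketch; the package
names are assumptions based on the stock CDH3 RPMs, so check the output
of rpm -qa | grep hadoop first). The Permission denied on in_use.lock in
your log also points at the dfs storage directory being owned by another
user, so clearing it while you are at it won't hurt:

    # remove the old packages and their on-disk state
    sudo yum remove hadoop-0.20 hadoop-0.20-conf-pseudo
    sudo rm -rf /var/lib/hadoop-0.20/cache   # stale namenode state, incl. in_use.lock
    # reinstall the pseudo-distributed setup and format a fresh namenode
    sudo yum install hadoop-0.20-conf-pseudo
    sudo -u hdfs hadoop namenode -format     # run as the user that owns the dfs dirs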

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hello Rahul,
>
>    That's great. That's the best way to learn (I am doing the same :) ).
> Since the installation part is over, I would suggest getting yourself
> familiar with HDFS and MapReduce first. Try doing basic filesystem
> operations using the HDFS API and running the wordcount program, if you
> haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
>> Hi Tariq,
>>
>> I am also new to Hadoop, trying to learn by myself; can anyone help me
>> with the same?
>> I have installed CDH3.
>>
>>
>>
>> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>>
>>> Hello Anand,
>>>
>>>     Is there any specific reason for not using ssh?
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>>> wrote:
>>> > [original post and full error log snipped]

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Hi Tariq,

I'm not getting off to the right start...

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Rahul,
>
>    That's great. That's the best way to learn (I am doing the same :) ).
> Since the installation part is over, I would suggest getting yourself
> familiar with HDFS and MapReduce first. Try doing basic filesystem
> operations using the HDFS API and running the wordcount program, if you
> haven't done it yet. Then move ahead.
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com>
> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop, trying to learn by myself; can anyone help me
> > with the same?
> > I have installed CDH3.
> >
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason for not using ssh?
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> >> wrote:
> >> > [original post and full error log snipped]

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Rahul,

   That's great. That's the best way to learn (I am doing the same :) ).
Since the installation part is over, I would suggest getting yourself
familiar with HDFS and MapReduce first. Try doing basic filesystem
operations using the HDFS API and running the wordcount program, if you
haven't done it yet; a sketch follows below. Then move ahead.
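
For example (a rough sketch; the examples jar path is an assumption
based on where the CDH3 packages usually put it, and any small local
file will do as input):

    # basic filesystem operations via the fs shell
    hadoop fs -mkdir /user/hive/input
    hadoop fs -put /etc/hosts /user/hive/input   # copy a local file into HDFS
    hadoop fs -ls /user/hive/input
    # run the stock wordcount example over that input
    hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount \
        /user/hive/input /user/hive/output
    hadoop fs -cat '/user/hive/output/part-*'    # inspect the result

Once that works end to end, doing the same operations programmatically
through the FileSystem API is a natural next step.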

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> Hi Tariq,
>
> I am also new to Hadoop, trying to learn by myself; can anyone help me
> with the same?
> I have installed CDH3.
>
>
>
> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>
>> Hello Anand,
>>
>>     Is there any specific reason for not using ssh?
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> wrote:
>> > [original post and full error log snipped]

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Rahul,

   That's great. That's the best way to learn(I am doing the same :)
). Since the installation part is over, I would suggest to get
yourself familiar with Hdfs and MapReduce first. Try to do basic
filesystem operations using the Hdfs API and run the wordcount
program, if you haven't done it yet. Then move ahead.

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> Hi Tariq,
>
> I am also new to Hadoop trying to learn my self can anyone help me on the
> same.
> i have installed CDH3.
>
>
>
> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>
>> Hello Anand,
>>
>>     Is there any specific reason behind not using ssh??
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> wrote:
>> > Hi, i am just learning the Hadoop and i am setting the development
>> > environment with CDH3 pseudo distributed mode without any ssh
>> > cofiguration
>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
>> > and
>> > run namenode this is the error it logs...
>> >
>> > [hive@localhost ~]$ hadoop namenode
>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> > /************************************************************
>> > STARTUP_MSG: Starting NameNode
>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> > STARTUP_MSG:   args = []
>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> > STARTUP_MSG:   build =
>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
>> > May
>> > 7 14:01:59 PDT 2012
>> > ************************************************************/
>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> > processName=NameNode, sessionId=null
>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>> > NameNodeMeterics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> > dfs.block.invalidate.limit=1000
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>> > FSNamesystemMetrics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>> > initialization
>> > failed.
>> > java.io.FileNotFoundException:
>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> > denied)
>> > at java.io.RandomAccessFile.open(Native Method)
>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>> > java.io.FileNotFoundException:
>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> > denied)
>> > at java.io.RandomAccessFile.open(Native Method)
>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >
>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> > /************************************************************
>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> > ************************************************************/
>> >
>> >
>
>

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Rahul,

   That's great. That's the best way to learn (I am doing the same :) ).
Since the installation part is over, I would suggest getting yourself
familiar with HDFS and MapReduce first. Try some basic filesystem
operations using the HDFS API and run the wordcount program, if you
haven't done so yet. Then move ahead.
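
For example, here are the same basics from the command line (the paths
and the examples jar location are only illustrative, so adjust them for
your install; the Java FileSystem API offers the equivalent calls):

# put a local file into HDFS and list it
hadoop fs -mkdir /user/hive/input
hadoop fs -put /tmp/sample.txt /user/hive/input
hadoop fs -ls /user/hive/input

# run the bundled wordcount example and read the result
hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount \
    /user/hive/input /user/hive/output
hadoop fs -cat /user/hive/output/part-r-00000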

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 5:20 PM, rahul p <ra...@gmail.com> wrote:
> Hi Tariq,
>
> I am also new to Hadoop trying to learn my self can anyone help me on the
> same.
> i have installed CDH3.
>
>
>
> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:
>>
>> Hello Anand,
>>
>>     Is there any specific reason behind not using ssh??
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
>> wrote:
>> > Hi, i am just learning the Hadoop and i am setting the development
>> > environment with CDH3 pseudo distributed mode without any ssh
>> > cofiguration
>> > in CentOS 6.2 . i can run the sample programs as usual but when i try
>> > and
>> > run namenode this is the error it logs...
>> >
>> > [hive@localhost ~]$ hadoop namenode
>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> > /************************************************************
>> > STARTUP_MSG: Starting NameNode
>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> > STARTUP_MSG:   args = []
>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> > STARTUP_MSG:   build =
>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
>> > May
>> > 7 14:01:59 PDT 2012
>> > ************************************************************/
>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> > processName=NameNode, sessionId=null
>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>> > NameNodeMeterics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> > dfs.block.invalidate.limit=1000
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>> > FSNamesystemMetrics using context
>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
>> > initialization
>> > failed.
>> > java.io.FileNotFoundException:
>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> > denied)
>> > at java.io.RandomAccessFile.open(Native Method)
>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>> > java.io.FileNotFoundException:
>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
>> > denied)
>> > at java.io.RandomAccessFile.open(Native Method)
>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> > at
>> >
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> > at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >
>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> > /************************************************************
>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> > ************************************************************/
>> >
>> >
>
>

Re: namenode instantiation error

Posted by rahul p <ra...@gmail.com>.
Hi Tariq,

I am also new to Hadoop and trying to learn it myself. Can anyone help
me with the same? I have installed CDH3.


On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Anand,
>
>     Is there any specific reason behind not using ssh??
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> wrote:
> > Hi, i am just learning the Hadoop and i am setting the development
> > environment with CDH3 pseudo distributed mode without any ssh
> cofiguration
> > in CentOS 6.2 . i can run the sample programs as usual but when i try and
> > run namenode this is the error it logs...
> >
> > [hive@localhost ~]$ hadoop namenode
> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> > STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> May
> > 7 14:01:59 PDT 2012
> > ************************************************************/
> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> > processName=NameNode, sessionId=null
> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> > NameNodeMeterics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> > dfs.block.invalidate.limit=1000
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> > FSNamesystemMetrics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> initialization
> > failed.
> > java.io.FileNotFoundException:
> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >
> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> > ************************************************************/
> >
> >
>

Re: namenode instantiation error

Posted by shashwat shriparv <dw...@gmail.com>.
Format the filesystem:

bin/hadoop namenode -format

then try to start the namenode :)
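
(A note of caution, since -format erases any existing HDFS metadata: in
the packaged CDH3 layout the name directory is owned by the hdfs user,
so the format and the start should both run as that user. A sketch:)

# format as the hdfs user; on 0.20.x/1.x the confirmation prompt is
# case-sensitive, so answer it with an uppercase Y
sudo -u hdfs hadoop namenode -format

# then start the namenode in the background
sudo service hadoop-0.20-namenode start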

On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Anand,
>
>     Is there any specific reason behind not using ssh??
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> wrote:
> > Hi, i am just learning the Hadoop and i am setting the development
> > environment with CDH3 pseudo distributed mode without any ssh
> cofiguration
> > in CentOS 6.2 . i can run the sample programs as usual but when i try and
> > run namenode this is the error it logs...
> >
> > [hive@localhost ~]$ hadoop namenode
> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> > STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> May
> > 7 14:01:59 PDT 2012
> > ************************************************************/
> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> > processName=NameNode, sessionId=null
> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> > NameNodeMeterics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> > dfs.block.invalidate.limit=1000
> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> > FSNamesystemMetrics using context
> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> initialization
> > failed.
> > java.io.FileNotFoundException:
> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> > at java.io.RandomAccessFile.open(Native Method)
> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> > at
> >
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> > at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >
> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> > ************************************************************/
> >
> >
>



-- 


∞
Shashwat Shriparv

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Anand,

    Is there any specific reason behind not using ssh??

Regards,
    Mohammad Tariq


On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
> Hi, i am just learning the Hadoop and i am setting the development
> environment with CDH3 pseudo distributed mode without any ssh cofiguration
> in CentOS 6.2 . i can run the sample programs as usual but when i try and
> run namenode this is the error it logs...
>
> [hive@localhost ~]$ hadoop namenode
> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
> -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May
> 7 14:01:59 PDT 2012
> ************************************************************/
> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> processName=NameNode, sessionId=null
> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> dfs.block.invalidate.limit=1000
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization
> failed.
> java.io.FileNotFoundException:
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>
> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
>
>

Re: namenode instantiation error

Posted by Mohamed Trad <ri...@inria.fr>.
Hi,

Try starting your HDFS from scratch: delete your HDFS folders and the Hadoop-related folders in /tmp, then recreate the HDFS folders and run hadoop namenode -format. It should be OK then.
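
(Roughly, under the packaged CDH3 layout — the exact directories depend
on your dfs.name.dir / hadoop.tmp.dir settings, and this wipes all HDFS
data, so treat it strictly as a sketch:)

# stop the daemon, clear the old state, re-format, restart
sudo service hadoop-0.20-namenode stop
sudo rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs/name /tmp/hadoop-*
sudo -u hdfs hadoop namenode -format
sudo service hadoop-0.20-namenode start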

Bests

Sent from my iPhone

On 11 Aug 2012, at 14:43, anand sharma <an...@gmail.com> wrote:

> Thanks Tariq, I already have.
> 
> On Fri, Aug 10, 2012 at 7:51 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Hello Anand,
> 
>    Sorry for being unresponsive. You have anyway got proper comments
> from the expert. I would just like to add one thing here. Since you
> want to reduce the complexity, I would suggest you configure ssh.
> It's a one-time pain but saves a lot of time and effort. Otherwise you
> have to go to each node even for the smallest thing. ssh configuration
> is quite straightforward, and if you need some help on that you can go
> here:
> http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html
> 
> Regards,
>     Mohammad Tariq
> 
> 
> On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <ha...@cloudera.com> wrote:
> > You do not need SSH generally. See
> > http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
> >
> > 1. Your original issue is that you are starting the NameNode as the
> > completely wrong user. Start it as the "hdfs" user, in a packaged
> > environment. Run "sudo -u hdfs hadoop namenode" to start it in
> > foreground, or simply run "sudo service hadoop-0.20-namenode start" to
> > start it in the background. This will fix it up for you.
> >
> > 2. Your format was aborted cause in 0.20.x/1.x, the input required was
> > case-sensitive, while in 2.x onwards the input is non-case-sensitive.
> > So if you typed "Y" instead of "y", it would have succeeded.
> >
> > HTH!
> >
> > On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <an...@gmail.com> wrote:
> >> And are permission for that file which is causing problem..
> >>
> >> [root@localhost hive]# ls -l
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >>
> >>
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
> >>>
> >>> Hi, i am just learning the Hadoop and i am setting the development
> >>> environment with CDH3 pseudo distributed mode without any ssh cofiguration
> >>> in CentOS 6.2 . i can run the sample programs as usual but when i try and
> >>> run namenode this is the error it logs...
> >>>
> >>> [hive@localhost ~]$ hadoop namenode
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting NameNode
> >>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> STARTUP_MSG:   build =
> >>> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> >>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> >>> 14:01:59 PDT 2012
> >>> ************************************************************/
> >>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> processName=NameNode, sessionId=null
> >>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> NameNodeMeterics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> dfs.block.invalidate.limit=1000
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> >>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> FSNamesystemMetrics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization
> >>> failed.
> >>> java.io.FileNotFoundException:
> >>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
> >>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at
> >>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at
> >>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>>
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> /************************************************************
> >>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> ************************************************************/
> >>>
> >>>
> >>
> >
> >
> >
> > --
> > Harsh J
> 
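
To make Harsh's first point above concrete: the "(Permission denied)" on in_use.lock most likely appears because the name directory is owned by the hdfs user, so a NameNode started as hive cannot create or lock files inside it, even though the stale lock file itself shows rwxrwxrwx. A quick check and fix, assuming the default packaged path from the log:

  ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name    # the directory's owner and mode are what matter
  sudo chown -R hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs/name
  sudo -u hdfs hadoop namenode                         # start in the foreground as the hdfs user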

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
Thanks Tariq, I already have.

On Fri, Aug 10, 2012 at 7:51 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Anand,
>
>    Sorry for being unresponsive. You have anyway got proper comments
> from the expert. I would just like to add one thing here. Since you
> want to reduce the complexity, I would suggest you configure ssh.
> It's a one-time pain but saves a lot of time and effort. Otherwise you
> have to go to each node even for the smallest thing. ssh configuration
> is quite straightforward, and if you need some help on that you can go
> here:
>
> http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html
>
> Regards,
>     Mohammad Tariq
>
>
> On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <ha...@cloudera.com> wrote:
> > You do not need SSH generally. See
> > http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
> >
> > 1. Your original issue is that you are starting the NameNode as the
> > completely wrong user. Start it as the "hdfs" user, in a packaged
> > environment. Run "sudo -u hdfs hadoop namenode" to start it in
> > foreground, or simply run "sudo service hadoop-0.20-namenode start" to
> > start it in the background. This will fix it up for you.
> >
> > 2. Your format was aborted cause in 0.20.x/1.x, the input required was
> > case-sensitive, while in 2.x onwards the input is non-case-sensitive.
> > So if you typed "Y" instead of "y", it would have succeeded.
> >
> > HTH!
> >
> > On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <an...@gmail.com>
> wrote:
> >> And are permission for that file which is causing problem..
> >>
> >> [root@localhost hive]# ls -l
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
> >> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >>
> >>
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com>
> wrote:
> >>>
> >>> Hi, i am just learning the Hadoop and i am setting the development
> >>> environment with CDH3 pseudo distributed mode without any ssh
> cofiguration
> >>> in CentOS 6.2 . i can run the sample programs as usual but when i try
> and
> >>> run namenode this is the error it logs...
> >>>
> >>> [hive@localhost ~]$ hadoop namenode
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting NameNode
> >>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> STARTUP_MSG:   build =
> >>> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> >>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon
> May  7
> >>> 14:01:59 PDT 2012
> >>> ************************************************************/
> >>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> >>> processName=NameNode, sessionId=null
> >>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
> >>> NameNodeMeterics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152
> entries
> >>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive
> (auth:SIMPLE)
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> >>> dfs.block.invalidate.limit=1000
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
> isAccessTokenEnabled=false
> >>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
> >>> FSNamesystemMetrics using context
> >>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem
> initialization
> >>> failed.
> >>> java.io.FileNotFoundException:
> >>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> 12/08/09 20:56:57 ERROR namenode.NameNode:
> java.io.FileNotFoundException:
> >>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission
> denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at
> >>>
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>>
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> /************************************************************
> >>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/
> 127.0.0.1
> >>> ************************************************************/
> >>>
> >>>
> >>
> >
> >
> >
> > --
> > Harsh J
>
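
On Harsh's second point, a reformat attempt on 0.20.x would look roughly like this (the path is assumed from the log earlier in the thread, and the prompt wording is approximate):

  sudo -u hdfs hadoop namenode -format
  # prompt: Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N)
  # answer with an uppercase Y -- on 0.20.x/1.x a lowercase y aborts the format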

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Anand,

   Sorry for being unresponsive. You have anyway already received proper
comments from the expert. I would just like to add one thing: since you
want to reduce complexity, I would suggest configuring SSH. It is a
one-time effort that saves a lot of time later; otherwise you have to
log in to each node even for the smallest task. SSH configuration is
quite straightforward, and if you need help with it you can look here:
http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html
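
For a single-node, pseudo-distributed setup, a minimal passwordless-SSH
sketch looks roughly like this (assuming OpenSSH is installed and sshd
is running; key type and paths are the usual defaults, adjust as needed):

  # generate a key pair with an empty passphrase (accept the defaults)
  ssh-keygen -t rsa -P ""
  # authorize that key for logins to this same machine
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys
  # verify that no password prompt appears
  ssh localhost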

Regards,
    Mohammad Tariq


On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <ha...@cloudera.com> wrote:
> You do not need SSH generally. See
> http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
>
> 1. Your original issue is that you are starting the NameNode as the
> completely wrong user. Start it as the "hdfs" user, in a packaged
> environment. Run "sudo -u hdfs hadoop namenode" to start it in
> foreground, or simply run "sudo service hadoop-0.20-namenode start" to
> start it in the background. This will fix it up for you.
>
> 2. Your format was aborted cause in 0.20.x/1.x, the input required was
> case-sensitive, while in 2.x onwards the input is non-case-sensitive.
> So if you typed "Y" instead of "y", it would have succeeded.
>
> HTH!
>
> On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <an...@gmail.com> wrote:
>> And are permission for that file which is causing problem..
>>
>> [root@localhost hive]# ls -l
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
>> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
>>
>>
>>
>>
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
>>>
>>> Hi, i am just learning the Hadoop and i am setting the development
>>> environment with CDH3 pseudo distributed mode without any ssh cofiguration
>>> in CentOS 6.2 . i can run the sample programs as usual but when i try and
>>> run namenode this is the error it logs...
>>>
>>> [hive@localhost ~]$ hadoop namenode
>>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2-cdh3u4
>>> STARTUP_MSG:   build =
>>> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
>>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
>>> 14:01:59 PDT 2012
>>> ************************************************************/
>>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>>> processName=NameNode, sessionId=null
>>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>>> NameNodeMeterics using context
>>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=1000
>>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>>> FSNamesystemMetrics using context
>>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization
>>> failed.
>>> java.io.FileNotFoundException:
>>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> at java.io.RandomAccessFile.open(Native Method)
>>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
>>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>>> at java.io.RandomAccessFile.open(Native Method)
>>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>>> at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>>
>>> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> ************************************************************/
>>>
>>>
>>
>
>
>
> --
> Harsh J

Re: namenode instantiation error

Posted by Harsh J <ha...@cloudera.com>.
You do not need SSH generally. See
http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F

1. Your original issue is that you are starting the NameNode as the
wrong user. In a packaged environment, start it as the "hdfs" user: run
"sudo -u hdfs hadoop namenode" to start it in the foreground, or simply
run "sudo service hadoop-0.20-namenode start" to start it in the
background. This will fix it for you (a command sketch follows below).

2. Your format was aborted because in 0.20.x/1.x the required input was
case-sensitive, while from 2.x onwards it is case-insensitive. So if
you had typed "Y" instead of "y", it would have succeeded.

HTH!
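
Putting points 1 and 2 together, a rough command sequence for a
packaged CDH3 install might be (the commands are the ones named above;
the format step is only needed if the name directory was never
formatted):

  # start the NameNode as the "hdfs" user, in the foreground
  sudo -u hdfs hadoop namenode
  # ...or as a background service instead
  sudo service hadoop-0.20-namenode start

  # if a (re)format is required, remember the prompt is case-sensitive
  # on 0.20.x/1.x: answer with an uppercase "Y"
  sudo -u hdfs hadoop namenode -format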

On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <an...@gmail.com> wrote:
> And are permission for that file which is causing problem..
>
> [root@localhost hive]# ls -l
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
>
>
>
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
>>
>> Hi, i am just learning the Hadoop and i am setting the development
>> environment with CDH3 pseudo distributed mode without any ssh cofiguration
>> in CentOS 6.2 . i can run the sample programs as usual but when i try and
>> run namenode this is the error it logs...
>>
>> [hive@localhost ~]$ hadoop namenode
>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2-cdh3u4
>> STARTUP_MSG:   build =
>> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
>> 14:01:59 PDT 2012
>> ************************************************************/
>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> processName=NameNode, sessionId=null
>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing
>> NameNodeMeterics using context
>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>> dfs.block.invalidate.limit=1000
>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing
>> FSNamesystemMetrics using context
>> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization
>> failed.
>> java.io.FileNotFoundException:
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> at java.io.RandomAccessFile.open(Native Method)
>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException:
>> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> at java.io.RandomAccessFile.open(Native Method)
>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>>
>> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> ************************************************************/
>>
>>
>



-- 
Harsh J

Re: namenode instantiation error

Posted by anand sharma <an...@gmail.com>.
And here are the permissions for the file that is causing the problem:

[root@localhost hive]# ls -l
/var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
-rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23
/var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
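
(For what it is worth: the lock file itself being world-writable may
not be enough. The open can still be denied by a directory along the
path that the starting user cannot search, or on CentOS by an SELinux
context; note the trailing "." in the mode string. A quick check, as a
sketch using the same path from the error:

  # every directory on the path must be searchable by the starting user
  ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs /var/lib/hadoop-0.20/cache/hadoop/dfs/name
  # hand the whole storage tree to the hdfs user and start as hdfs
  sudo chown -R hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs
  sudo -u hdfs hadoop namenode
)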





On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:

> [original message and full NameNode startup log quoted; trimmed]

Re: namenode instantiation error

Posted by Nitin Pawar <ni...@gmail.com>.
Which user are you starting the namenode as?

If you are not root, does that user have write access to the mentioned directory?
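
A rough way to verify, assuming the storage path shown in the log (write_test
is just a scratch file name for illustration):

whoami
ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
touch /var/lib/hadoop-0.20/cache/hadoop/dfs/name/write_test \
  && rm /var/lib/hadoop-0.20/cache/hadoop/dfs/name/write_test

If the touch fails, the account has no write access to the directory, which
would explain the Permission denied in the stack trace.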

On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
> [original message and full NameNode startup log quoted; trimmed]



-- 
Nitin Pawar

Re: namenode instantiation error

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Anand,

    Is there any specific reason for not using ssh?

Regards,
    Mohammad Tariq
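
For context: ssh is only used by the start-all.sh / start-dfs.sh wrapper
scripts to log in to each node and launch its daemons, so a pseudo-distributed
setup can do without it by starting each daemon directly. A minimal sketch,
assuming the stock 0.20 scripts are on the PATH:

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
hadoop-daemon.sh start secondarynamenode

Each daemon then writes to the usual Hadoop log directory, so errors like the
Permission denied above still show up there.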


On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <an...@gmail.com> wrote:
> [original message and full NameNode startup log quoted; trimmed]
