Posted to common-user@hadoop.apache.org by Onur AKTAS <on...@live.com> on 2009/08/04 01:57:23 UTC

Problem with starting Hadoop in Pseudo Distributed Mode

Hi,

I'm having trouble running Hadoop on RHEL 5. I did everything as documented in:
http://hadoop.apache.org/common/docs/r0.20.0/quickstart.html

and configured conf/core-site.xml, conf/hdfs-site.xml, and
conf/mapred-site.xml.

Set up passphraseless SSH to "localhost", then did the following:

$ bin/hadoop namenode -format
$ bin/start-all.sh 
starting namenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-tasktracker-localhost.localdomain.out

Everything seems OK, but when I check the Hadoop logs I see many errors (and they all cause HBase connection problems).
How can I solve this? Here are the logs:

 hadoop-oracle-datanode-localhost.localdomain.log:
2009-08-04 02:54:28,971 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
2009-08-04 02:54:29,562 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-oracle/dfs/data: namenode namespaceID = 36527197; datanode namespaceID = 2138759529
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

2009-08-04 02:54:29,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
------------------------------------------------------------------------------------------
hadoop-oracle-namenode-localhost.localdomain.log
2009-08-04 02:54:26,987 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
2009-08-04 02:54:27,116 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2009-08-04 02:54:27,174 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost.localdomain/127.0.0.1:9000
2009-08-04 02:54:27,179 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-08-04 02:54:27,180 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-08-04 02:54:27,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=oracle,oinstall,root,dba,oper,asmadmin
2009-08-04 02:54:27,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2009-08-04 02:54:27,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2009-08-04 02:54:27,294 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-08-04 02:54:27,297 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2009-08-04 02:54:27,341 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 8
2009-08-04 02:54:27,348 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 2
2009-08-04 02:54:27,351 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 923 loaded in 0 seconds.
2009-08-04 02:54:27,351 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-oracle/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2009-08-04 02:54:27,435 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 923 saved in 0 seconds.
2009-08-04 02:54:27,495 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 262 msecs
2009-08-04 02:54:27,496 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2009-08-04 02:54:27,496 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2009-08-04 02:54:27,696 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2009-08-04 02:54:27,775 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2009-08-04 02:54:27,775 INFO org.mortbay.log: jetty-6.1.14
2009-08-04 02:54:28,277 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2009-08-04 02:54:28,278 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2009-08-04 02:54:28,278 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2009-08-04 02:54:28,279 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2009-08-04 02:54:28,280 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2009-08-04 02:54:28,280 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2009-08-04 02:54:28,316 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2009-08-04 02:54:28,316 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2009-08-04 02:54:28,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2009-08-04 02:54:28,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2009-08-04 02:54:28,328 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2009-08-04 02:54:28,361 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2009-08-04 02:54:28,362 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2009-08-04 02:54:28,366 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2009-08-04 02:54:38,433 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=listStatus    src=/tmp/hadoop-oracle/mapred/system    dst=null    perm=null
2009-08-04 02:54:38,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=delete    src=/tmp/hadoop-oracle/mapred/system    dst=null    perm=null
2009-08-04 02:54:38,773 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=mkdirs    src=/tmp/hadoop-oracle/mapred/system    dst=null    perm=oracle:supergroup:rwxr-xr-x
2009-08-04 02:54:38,785 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=setPermission    src=/tmp/hadoop-oracle/mapred/system    dst=null    perm=oracle:supergroup:rwx-wx-wx
2009-08-04 02:54:38,862 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=create    src=/tmp/hadoop-oracle/mapred/system/jobtracker.info    dst=null    perm=oracle:supergroup:rw-r--r--
2009-08-04 02:54:38,900 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1    cmd=setPermission    src=/tmp/hadoop-oracle/mapred/system/jobtracker.info    dst=null    perm=oracle:supergroup:rw-------
2009-08-04 02:54:38,955 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
[the same addBlock call and java.io.IOException ("could only be replicated to 0 nodes, instead of 1") repeat at 02:54:39, 02:54:40, 02:54:41 and 02:54:45, each with an identical stack trace]




Re: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Amandeep Khurana <am...@gmail.com>.
No probs.

I hope you pointed the data directory out of /tmp as well... If not, do
that too. Otherwise, when /tmp gets cleaned up, you'll lose your data.
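For example, something like this in conf/hdfs-site.xml should do it (the
/data/hadoop paths below are just placeholders - put them on whatever
persistent disk you have):

<!-- keep HDFS metadata and block storage out of /tmp;
     the /data/hadoop paths are illustrative placeholders -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/hadoop/dfs/data</value>
</property>

(You'd have to reformat the namenode after pointing it at the new
directories, since they start out empty.)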



Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


2009/8/3 Onur AKTAS <on...@live.com>

>
> Thank you very much!
>
> I added the property below to conf/core-site.xml and reformatted again.
> It started without any problems, and I also started HBase and connected
> to it with a client!
>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/tmp/hadoop-onur</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> Thank you again..
>
> [rest of the quoted thread snipped; the earlier messages are quoted in
> full under Onur's reply below]
>

RE: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Onur AKTAS <on...@live.com>.
Thank you very much!

I added the property below to conf/core-site.xml and reformatted again.
It started without any problems, and I also started HBase and connected to it with a client!

<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-onur</value>
  <description>A base for other temporary directories.</description>
</property>

Thank you again..
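
(For anyone hitting the same errors, the rough sequence that got things
working here - assuming the default /tmp/hadoop-<user> layout from the
thread - was:

$ bin/stop-all.sh                # stop any half-started daemons
$ rm -rf /tmp/hadoop-oracle      # clear the old namenode/datanode state;
                                 # this is what fixes the namespaceID mismatch
$ vi conf/core-site.xml          # add the hadoop.tmp.dir property above
$ bin/hadoop namenode -format    # reformat against the clean directory
$ bin/start-all.sh

Wiping the data directory loses anything already in HDFS, which is fine
here since the datanode never came up.)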

> From: amansk@gmail.com
> Date: Mon, 3 Aug 2009 17:48:24 -0700
> Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
> To: common-user@hadoop.apache.org
> 
> 1. The default xmls are in $HADOOP_HOME/build/classes
> 2. You have to override the parameters and put them in the site xmls, so
> you can have it in some other directory and not /tmp
> 
> Do that and try starting hadoop.
> 
> 
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
> 
> 
> 2009/8/3 Onur AKTAS <on...@live.com>
> 
> >
> > There is no default.xml in Hadoop 0.20.0, but luckily I also have
> > release 0.18.3 and found this:
> >
> > <property>
> >  <name>hadoop.tmp.dir</name>
> >  <value>/tmp/hadoop-${user.name}</value>
> >  <description>A base for other temporary directories.</description>
> > </property>
> >
> > It seems /tmp/hadoop-${user.name} is a temporary directory, as the
> > description indicates; where is the real directory, then?
> > I deleted the whole tmp directory and formatted again, started the
> > server, checked the logs, and I still have the same errors.
> >
> > > Date: Mon, 3 Aug 2009 17:29:52 -0700
> > > Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
> > > From: amansk@gmail.com
> > > To: common-user@hadoop.apache.org
> > >
> > > Yes, you need to change these directories. The config is put in the
> > > hadoop-site.xml. Or in this case, separately in the 3 xmls. See the
> > > default xml for syntax and property name.
> > >
> > > On 8/3/09, Onur AKTAS <on...@live.com> wrote:
> > > >
> > > > Is it the directory that Hadoop uses?
> > > >
> > > > /tmp/hadoop-oracle
> > > > /tmp/hadoop-oracle/dfs/
> > > > /tmp/hadoop-oracle/mapred/
> > > >
> > > > If yes, how can I change the directory to anywhere else? I do not
> > > > want it to be kept in the /tmp folder.
> > > >
> > > >> From: amansk@gmail.com
> > > >> Date: Mon, 3 Aug 2009 17:02:50 -0700
> > > >> Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
> > > >> To: common-user@hadoop.apache.org
> > > >>
> > > >> I'm assuming that you have no data in HDFS since it never came up...
> > > >> So, go ahead and clean up the directory where you are storing the
> > > >> datanode's data and the namenode's metadata. After that, format the
> > > >> namenode and restart hadoop.
> > > >>
> > > >>
> > > >> 2009/8/3 Onur AKTAS <on...@live.com>
> > > >>
> > > >> > [original post and logs snipped; they are quoted in full at the
> > > >> > top of the thread]
> > > >> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > > >> > 2009-08-04 02:54:40,359 INFO org.apache.hadoop.ipc.Server: IPC
> > Server
> > > >> > handler 6 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
> > > >> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> > > >> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >> >    at
> > > >> >
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >> >    at
> > > >> >
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> > > >> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > >> >    at java.security.AccessController.doPrivileged(Native Method)
> > > >> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > > >> > 2009-08-04 02:54:41,969 INFO org.apache.hadoop.ipc.Server: IPC
> > Server
> > > >> > handler 7 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
> > > >> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> > > >> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >> >    at
> > > >> >
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >> >    at
> > > >> >
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> > > >> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > >> >    at java.security.AccessController.doPrivileged(Native Method)
> > > >> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > > >> > 2009-08-04 02:54:45,180 INFO org.apache.hadoop.ipc.Server: IPC
> > Server
> > > >> > handler 8 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
> > > >> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> > java.io.IOException: File
> > > >> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be
> > replicated
> > > >> > to 0 nodes, instead of 1
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> > > >> >    at
> > > >> >
> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> > > >> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >> >    at
> > > >> >
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >> >    at
> > > >> >
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> > > >> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > >> >    at java.security.AccessController.doPrivileged(Native Method)
> > > >> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > > >> >
> > > >> >
> > > >> >
> > > >> > _________________________________________________________________
> > > >> > Windows Live ile fotoğraflarınızı organize edebilir, düzenleyebilir
> > ve
> > > >> > paylaşabilirsiniz.
> > > >> >
> > > >> >
> > http://www.microsoft.com/turkiye/windows/windowslive/products/photo-gallery-edit.aspx
> > > >
> > > > _________________________________________________________________
> > > > Windows Live tüm arkadaşlarınızla tek bir yerden iletişim kurmanıza
> > yardımcı
> > > > olur.
> > > >
> > http://www.microsoft.com/turkiye/windows/windowslive/products/social-network-connector.aspx
> > >
> > >
> > > --
> > >
> > >
> > > Amandeep Khurana
> > > Computer Science Graduate Student
> > > University of California, Santa Cruz
> >
> > _________________________________________________________________
> > Sadece e-posta iletilerinden daha fazlası: Diğer Windows Live(tm)
> > özelliklerine göz atın.
> > http://www.microsoft.com/turkiye/windows/windowslive/
> >

_________________________________________________________________
Windows Live ile fotoğraflarınızı organize edebilir, düzenleyebilir ve paylaşabilirsiniz.
http://www.microsoft.com/turkiye/windows/windowslive/products/photo-gallery-edit.aspx

Re: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Amandeep Khurana <am...@gmail.com>.
1. The default xmls are in $HADOOP_HOME/build/classes
2. You have to override the parameters and put them in the site xmls, so
the data can live in some other directory instead of /tmp

Do that and try starting Hadoop again.
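
A minimal override in conf/core-site.xml might look like this (just a
sketch; /hda3/hadoop-data is a placeholder path, not something from this
thread):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/hda3/hadoop-data</value>
  <description>Base for the DFS and MapReduce working directories.</description>
</property>

dfs.name.dir and dfs.data.dir (hdfs-site.xml) and mapred.local.dir
(mapred-site.xml) all default to paths under hadoop.tmp.dir, so overriding
this one property is usually enough to move everything out of /tmp.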


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz



RE: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Onur AKTAS <on...@live.com>.
There is no default.xml in Hadoop 0.20.0, but luckily I also have release 0.18.3 and found this:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

It seems /tmp/hadoop-${user.name} is only a base for temporary directories, as the description says, so where is the real data directory?
I deleted the whole tmp directory and formatted the namenode again, then started the server and checked the logs, and I still see the same errors.
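
In case it matters, the cleanup sequence I used was something like this
(daemons stopped first, and assuming nothing in HDFS needs to be kept):

$ bin/stop-all.sh               # make sure no daemon is still holding the old dirs
$ rm -rf /tmp/hadoop-oracle     # the default hadoop.tmp.dir for this user
$ bin/hadoop namenode -format
$ bin/start-all.sh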

> Date: Mon, 3 Aug 2009 17:29:52 -0700
> Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
> From: amansk@gmail.com
> To: common-user@hadoop.apache.org
> 
> Yes, you need to change these directories. The config goes in
> hadoop-site.xml, or in this case separately in the three site xmls. See the
> default xml for the syntax and property names.

Re: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Amandeep Khurana <am...@gmail.com>.
Yes, you need to change these directories. The config goes in
hadoop-site.xml, or in this case separately in the three site xmls. See the
default xml for the syntax and property names.
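
For example, the split across the three files could look like this (a
sketch based on the 0.20 pseudo-distributed quickstart values, so
double-check the property names against the default xmls):

conf/core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

conf/hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

conf/mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>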

On 8/3/09, Onur AKTAS <on...@live.com> wrote:
>
> Is this the directory that Hadoop uses?
>
> /tmp/hadoop-oracle
> /tmp/hadoop-oracle/dfs/
> /tmp/hadoop-oracle/mapred/
>
> If yes, how can I change it to somewhere else? I do not want the data to
> be kept in the /tmp folder.
>
>> From: amansk@gmail.com
>> Date: Mon, 3 Aug 2009 17:02:50 -0700
>> Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
>> To: common-user@hadoop.apache.org
>>
>> I'm assuming that you have no data in HDFS since it never came up... So, go
>> ahead and clean up the directory where you are storing the datanode's data
>> and the namenode's metadata. After that, format the namenode and restart
>> hadoop.
>> > FSImage in 262 msecs
>> > 2009-08-04 02:54:27,496 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
>> > blocks
>> > = 0
>> > 2009-08-04 02:54:27,496 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
>> > blocks = 0
>> > 2009-08-04 02:54:27,497 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>> > under-replicated blocks = 0
>> > 2009-08-04 02:54:27,497 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>> >  over-replicated blocks = 0
>> > 2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE*
>> > Leaving safe mode after 0 secs.
>> > 2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE*
>> > Network topology has 0 racks and 0 datanodes
>> > 2009-08-04 02:54:27,497 INFO org.apache.hadoop.hdfs.StateChange: STATE*
>> > UnderReplicatedBlocks has 0 blocks
>> > 2009-08-04 02:54:27,696 INFO org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2009-08-04 02:54:27,775 INFO org.apache.hadoop.http.HttpServer: Jetty
>> > bound
>> > to port 50070
>> > 2009-08-04 02:54:27,775 INFO org.mortbay.log: jetty-6.1.14
>> > 2009-08-04 02:54:28,277 INFO org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:50070
>> > 2009-08-04 02:54:28,278 INFO
>> > org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
>> > 0.0.0.0:50070
>> > 2009-08-04 02:54:28,278 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > Responder: starting
>> > 2009-08-04 02:54:28,279 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > listener on 9000: starting
>> > 2009-08-04 02:54:28,280 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 0 on 9000: starting
>> > 2009-08-04 02:54:28,280 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 1 on 9000: starting
>> > 2009-08-04 02:54:28,316 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 2 on 9000: starting
>> > 2009-08-04 02:54:28,316 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 3 on 9000: starting
>> > 2009-08-04 02:54:28,321 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 4 on 9000: starting
>> > 2009-08-04 02:54:28,321 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 5 on 9000: starting
>> > 2009-08-04 02:54:28,328 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 6 on 9000: starting
>> > 2009-08-04 02:54:28,361 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 7 on 9000: starting
>> > 2009-08-04 02:54:28,362 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 8 on 9000: starting
>> > 2009-08-04 02:54:28,366 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 9 on 9000: starting
>> > 2009-08-04 02:54:38,433 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> >  cmd=listStatus    src=/tmp/hadoop-oracle/mapred/system    dst=null
>> >  perm=null
>> > 2009-08-04 02:54:38,755 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> > cmd=delete
>> >    src=/tmp/hadoop-oracle/mapred/system    dst=null    perm=null
>> > 2009-08-04 02:54:38,773 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> > cmd=mkdirs
>> >    src=/tmp/hadoop-oracle/mapred/system    dst=null
>> >  perm=oracle:supergroup:rwxr-xr-x
>> > 2009-08-04 02:54:38,785 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> >  cmd=setPermission    src=/tmp/hadoop-oracle/mapred/system    dst=null
>> >  perm=oracle:supergroup:rwx-wx-wx
>> > 2009-08-04 02:54:38,862 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> > cmd=create
>> >    src=/tmp/hadoop-oracle/mapred/system/jobtracker.info    dst=null
>> >  perm=oracle:supergroup:rw-r--r--
>> > 2009-08-04 02:54:38,900 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
>> > ugi=oracle,oinstall,root,dba,oper,asmadmin    ip=/127.0.0.1
>> >  cmd=setPermission
>> > src=/tmp/hadoop-oracle/mapred/system/jobtracker.info   dst=null
>> > perm=oracle:supergroup:rw-------
>> > 2009-08-04 02:54:38,955 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 4 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
>> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> > 2009-08-04 02:54:39,548 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 5 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
>> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> > 2009-08-04 02:54:40,359 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 6 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
>> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> > 2009-08-04 02:54:41,969 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 7 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
>> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> > 2009-08-04 02:54:45,180 INFO org.apache.hadoop.ipc.Server: IPC Server
>> > handler 8 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/
>> > jobtracker.info, DFSClient_-603868025) from 127.0.0.1:51803: error:
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> > java.io.IOException: File
>> > /tmp/hadoop-oracle/mapred/system/jobtracker.infocould only be replicated
>> > to 0 nodes, instead of 1
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
>> >    at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >    at
>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >    at
>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>> >    at java.security.AccessController.doPrivileged(Native Method)
>> >    at javax.security.auth.Subject.doAs(Subject.java:396)
>> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>> >
>> >
>> >
>> > _________________________________________________________________
>> > Windows Live ile fotoğraflarınızı organize edebilir, düzenleyebilir ve
>> > paylaşabilirsiniz.
>> >
>> > http://www.microsoft.com/turkiye/windows/windowslive/products/photo-gallery-edit.aspx
>
> _________________________________________________________________
> Windows Live tüm arkadaşlarınızla tek bir yerden iletişim kurmanıza yardımcı
> olur.
> http://www.microsoft.com/turkiye/windows/windowslive/products/social-network-connector.aspx


-- 


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz

RE: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Onur AKTAS <on...@live.com>.
Are these the directories that Hadoop uses?

/tmp/hadoop-oracle
/tmp/hadoop-oracle/dfs/ 
/tmp/hadoop-oracle/mapred/

If so, how can I change these directories to another location? I do not want them kept in the /tmp folder.

> From: amansk@gmail.com
> Date: Mon, 3 Aug 2009 17:02:50 -0700
> Subject: Re: Problem with starting Hadoop in Pseudo Distributed Mode
> To: common-user@hadoop.apache.org
> 
> I'm assuming you have no data in HDFS, since it never came up. So go
> ahead and clean up the directories where you are storing the datanode's
> data and the namenode's metadata. After that, format the namenode and
> restart Hadoop.
> 


Re: Problem with starting Hadoop in Pseudo Distributed Mode

Posted by Amandeep Khurana <am...@gmail.com>.
I'm assuming you have no data in HDFS, since it never came up. So go
ahead and clean up the directories where you are storing the datanode's
data and the namenode's metadata. After that, format the namenode and
restart Hadoop.
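
Concretely, something like this (a sketch assuming the default /tmp layout
shown in your logs; double-check the paths before deleting anything):

$ bin/stop-all.sh
$ rm -rf /tmp/hadoop-oracle/dfs /tmp/hadoop-oracle/mapred
$ bin/hadoop namenode -format
$ bin/start-all.sh

The "Incompatible namespaceIDs" error in the datanode log is what you get
when the namenode is reformatted while the datanode still holds data from an
earlier format; wiping the data directory before formatting keeps the two IDs
in sync.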

