Posted to common-user@hadoop.apache.org by Krishna Kumar <kr...@nechclst.in> on 2009/11/27 06:20:02 UTC
please help in setting hadoop
Dear All,
Can anybody please help me resolve these error messages?
[root@master hadoop]# hadoop jar /usr/lib/hadoop/hadoop-0.18.3-14.cloudera.CH0_3-examples.jar wordcount test test-op
09/11/26 17:15:45 INFO mapred.FileInputFormat: Total input paths to process : 4
09/11/26 17:15:45 INFO mapred.FileInputFormat: Total input paths to process : 4
org.apache.hadoop.ipc.RemoteException: java.io.IOException: No valid local directories in property: mapred.local.dir
        at org.apache.hadoop.conf.Configuration.getLocalPath(Configuration.java:730)
        at org.apache.hadoop.mapred.JobConf.getLocalPath(JobConf.java:222)
        at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:194)
        at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1557)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:890)
I am running the Hadoop cluster as the root user on two server nodes, master and slave. My hadoop-site.xml file is as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-${user.name}</value>
    <!-- <value>/var/lib/hadoop/cache/${user.name}</value> -->
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
</configuration>
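For reference, mapred.local.dir is not set explicitly in this file, so it should fall back to its default of ${hadoop.tmp.dir}/mapred/local. One thing I could try (an assumption on my part, not a confirmed fix) is setting it explicitly so there is no ambiguity about which value the JobTracker resolves:

```xml
<!-- Hypothetical addition: pin mapred.local.dir explicitly instead of
     relying on the ${hadoop.tmp.dir}/mapred/local default. -->
<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/hadoop-root/mapred/local</value>
</property>
```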
Further, the output of the ls command is as follows:
[root@master hadoop]# ls -l /home/hadoop/hadoop-root/
total 8
drwxr-xr-x 4 root root 4096 Nov 26 16:48 dfs
drwxr-xr-x 3 root root 4096 Nov 26 16:49 mapred
[root@master hadoop]#
[root@master hadoop]#
[root@master hadoop]# ls -l /home/hadoop/hadoop-root/mapred/
total 4
drwxr-xr-x 2 root root 4096 Nov 26 16:49 local
[root@master hadoop]#
[root@master hadoop]# ls -l /home/hadoop/hadoop-root/mapred/local/
total 0
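As the listing above shows, the local directory does exist on the master. A minimal sanity check, sketched below under the assumption that mapred.local.dir resolves to ${hadoop.tmp.dir}/mapred/local, would be to verify on every node (master and slave) that the path exists and is writable by the user running the daemons:

```shell
# Hypothetical check for the resolved mapred.local.dir path; the path
# below is assumed from hadoop.tmp.dir in the config above. Run on
# every node in the cluster.
LOCAL_DIR=${LOCAL_DIR:-/home/hadoop/hadoop-root/mapred/local}
mkdir -p "$LOCAL_DIR"   # create the directory if it is missing
if [ -d "$LOCAL_DIR" ] && [ -w "$LOCAL_DIR" ]; then
    echo "ok: $LOCAL_DIR is a writable directory"
else
    echo "problem: $LOCAL_DIR is missing or not writable" >&2
fi
```

If the check fails on either node, the JobTracker would have no valid local directory to use, which matches the IOException above.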
Thanks and Best Regards,
Krishna Kumar
Senior Storage Engineer