Posted to common-user@hadoop.apache.org by Kirk Hunter <kh...@ptpnow.com> on 2009/05/04 22:54:01 UTC

Wrong FS Exception

Can someone tell me how to resolve the following error message, found in the
JobTracker log file when trying to start MapReduce?

grep FATAL *
hadoop-hadoop-jobtracker-hadoop-1.log:2009-05-04 16:35:14,176 FATAL
org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException:
Wrong FS: hdfs://usr/local/hadoop-datastore/hadoop-hadoop/mapred/system,
expected: hdfs://localhost:54310
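
(A note on this error: the JobTracker builds its system directory from
mapred.system.dir, which in the Hadoop releases of this era defaults to
${hadoop.tmp.dir}/mapred/system, and checks it against fs.default.name. A
path that begins with // is parsed as a URI whose first component becomes
the authority, so //usr/local/... resolves to hdfs://usr/..., with "usr"
taken as a host. That matches the path in the exception above. The default
property, shown here for reference only:

<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
</property>
)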



Here is my hadoop-site.xml as well


<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>//usr/local/hadoop-datastore/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
<property> <!-- OH: this is to solve the HADOOP-1212 bug that causes
"Incompatible namespaceIDs" in the datanode log -->
<name>dfs.data.dir</name>
<value>/usr/local/hadoop-datastore/hadoop-${user.name}/dfs/data</value>
</property>
<!-- if the incompatible problem persists, %rm -r
/usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
from the problematic datanode and reformat the namenode -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. The URI's scheme
determines the config property (fs.SCHEME.impl) naming the FileSystem
implementation class. The URI's authority is used to determine the host,
port, etc. for a filesystem.</description>
</property>

<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If
"local", then jobs are run in-process as a single map and reduce
task.</description>
</property>
</configuration>
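
(Assuming the doubled slash in the hadoop.tmp.dir value above is the
culprit, which the hdfs://usr/... path in the exception suggests, a minimal
fix is a single leading slash, so that derived paths like mapred/system stay
plain absolute paths:

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>

With that value, the JobTracker should resolve its system directory against
hdfs://localhost:54310 as expected.)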



Re: Wrong FS Exception

Posted by Bradford Stephens <br...@gmail.com>.
Are you trying to run a distributed cluster? Does everything have the
same config file? If so, every node is going to look at "localhost"
instead of the correct host for fs.default.name, mapred.job.tracker,
etc.
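
As a sketch of that suggestion (the hostname "master" below is a
placeholder, not something from this thread), every node of a real
multi-node cluster would carry the same config pointing at the actual master
host rather than localhost:

<property>
<name>fs.default.name</name>
<value>hdfs://master:54310</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
</property>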

On Mon, May 4, 2009 at 1:54 PM, Kirk Hunter <kh...@ptpnow.com> wrote:
>
> Can someone tell me how to resolve the following error message, found in the
> JobTracker log file when trying to start MapReduce?
>
> grep FATAL *
> hadoop-hadoop-jobtracker-hadoop-1.log:2009-05-04 16:35:14,176 FATAL
> org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException:
> Wrong FS: hdfs://usr/local/hadoop-datastore/hadoop-hadoop/mapred/system,
> expected: hdfs://localhost:54310