Posted to common-user@hadoop.apache.org by Kevin Burton <rk...@charter.net> on 2013/04/30 18:36:11 UTC

Can't initialize cluster

I have a simple MapReduce job that I am trying to run on my cluster. When I run it I get:

 

13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "devubuntu05:9001"

13/04/30 11:27:45 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

 

My core-site.xml looks like:

 

<property>
  <name>fs.default.name</name>
  <value>hdfs://devubuntu05:9000</value>
  <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.</description>
</property>

 

So I am unclear as to why the job is looking at devubuntu05:9001 when core-site.xml only mentions port 9000.
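If it helps, my guess is that the 9001 is not coming from core-site.xml at all but from the jobtracker setting in my mapred-site.xml, which I believe looks something like the sketch below (I am paraphrasing from memory, so the exact key name may differ; older configs use mapred.job.tracker, newer ones mapreduce.jobtracker.address):

```xml
<!-- mapred-site.xml (sketch, not a verified copy of my file).
     I assume this is where the devubuntu05:9001 value originates:
     when this key is set to a host:port, the LocalJobRunner refuses it,
     which would explain the "Invalid ... for LocalJobRunner" message. -->
<property>
  <name>mapred.job.tracker</name>
  <value>devubuntu05:9001</value>
</property>
```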

 

Here is the code:

 

    // Assumes the usual Hadoop MapReduce imports:
    //   org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.Path,
    //   org.apache.hadoop.io.IntWritable, org.apache.hadoop.io.Text,
    //   org.apache.hadoop.mapreduce.Job, org.apache.hadoop.util.GenericOptionsParser
    public static void WordCount(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

 

Ideas?