Posted to mapreduce-user@hadoop.apache.org by Harsh J <ha...@cloudera.com> on 2013/05/01 08:02:25 UTC

Re: Can't initialize cluster

When you run with "java -jar", as previously stated on another thread,
you aren't loading any of the configs present on the installation (the
ones that configure HDFS to be the default filesystem).

When you run with "hadoop jar", the configs under /etc/hadoop/conf get
applied automatically to your program, making it (1) use HDFS as the
default FS and (2) run the job in distributed mode, as opposed to local
mode with your config-less "java -jar" invocation.
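
If you want the "java -jar" route to pick up the same settings, you can
load the installed config files into your Configuration before
submitting the job. A minimal sketch, assuming the configs live under
/etc/hadoop/conf (adjust to your install):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    // Replicates part of what "hadoop jar" does: load the installed
    // configs explicitly. The /etc/hadoop/conf paths are an assumption.
    Configuration conf = new Configuration();
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    // With these loaded, fs.default.name points at HDFS and the job is
    // submitted to the cluster instead of the LocalJobRunner.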

On Tue, Apr 30, 2013 at 11:36 PM, Kevin Burton <rk...@charter.net> wrote:
> We are making progress. Now I get the error:
>
>
>
> 13/04/30 12:59:40 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.
>
> 13/04/30 12:59:40 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/staging/kevin/.staging/job_201304301251_0003
>
> 13/04/30 12:59:40 ERROR security.UserGroupInformation:
> PriviledgedActionException as:kevin (auth:SIMPLE)
> cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input
> path does not exist: hdfs://devubuntu05:9000/user/kevin/input
>
> Exception in thread "main"
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does
> not exist: hdfs://devubuntu05:9000/user/kevin/input
>
>
>
> When I run it with java -jar, the input and output are local folders. When
> running it with hadoop jar, it seems to expect the folders (input and
> output) to be on the HDFS file system. I am not sure why these two methods
> of invocation don’t make the same file system assumptions.
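
That is expected: once the cluster configs are applied, relative paths
like "input" resolve inside HDFS under /user/kevin rather than in the
local working directory. The input has to be staged into HDFS first,
along these lines (paths taken from the error message above; the file
names are assumptions):

    hadoop fs -mkdir /user/kevin/input
    hadoop fs -put /home/kevin/WordCount/input/* /user/kevin/input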
>
>
>
> It is
>
>
>
> hadoop jar WordCount.jar input output (which gives the above exception)
>
>
>
> versus
>
>
>
> java -jar WordCount.jar input output (which outputs the wordcount statistics
> to the output folder)
>
>
>
> This is run in the local /home/kevin/WordCount folder.
>
>
>
> Kevin
>
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 12:33 PM
> To: user@hadoop.apache.org
> Subject: Re: Can't initialize cluster
>
>
>
> Set "HADOOP_MAPRED_HOME" in your hadoop-env.sh file and re-run the job. See
> if it helps.
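
For reference, that is a one-line addition to hadoop-env.sh; the exact
path is install-specific, so the value below is only an example for a
packaged install:

    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce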
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
> On Tue, Apr 30, 2013 at 10:10 PM, Kevin Burton <rk...@charter.net>
> wrote:
>
> To be clear, when this code is run with ‘java -jar’ it runs without
> exception. The exception occurs when I run it with ‘hadoop jar’.
>
>
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 11:36 AM
> To: user@hadoop.apache.org
> Subject: Can't initialize cluster
>
>
>
> I have a simple MapReduce job that I am trying to get to run on my cluster.
> When I run it I get:
>
>
>
> 13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid
> "mapreduce.jobtracker.address" configuration value for LocalJobRunner :
> "devubuntu05:9001"
>
> 13/04/30 11:27:45 ERROR security.UserGroupInformation:
> PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException:
> Cannot initialize Cluster. Please check your configuration for
> mapreduce.framework.name and the correspond server addresses.
>
> Exception in thread "main" java.io.IOException: Cannot initialize Cluster.
> Please check your configuration for mapreduce.framework.name and the
> correspond server addresses.
>
>
>
> My core-site.xml looks like:
>
>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://devubuntu05:9000</value>
>   <description>The name of the default file system. A URI whose scheme and
>   authority determine the FileSystem implementation.</description>
> </property>
>
>
>
> So I am unclear as to why it is looking at devubuntu05:9001.
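
That address comes from the MapReduce config rather than core-site.xml:
the client reads "mapreduce.framework.name" and the jobtracker address
from mapred-site.xml (typically under /etc/hadoop/conf). For a
JobTracker-based setup the entries would look roughly like this; the
values are inferred from the log above, so treat this as a sketch:

    <property>
      <name>mapreduce.framework.name</name>
      <value>classic</value>
    </property>
    <property>
      <name>mapreduce.jobtracker.address</name>
      <value>devubuntu05:9001</value>
    </property>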
>
>
>
> Here is the code:
>
>
>
>     public static void WordCount( String[] args ) throws Exception {
>         Configuration conf = new Configuration();
>         String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
>         if (otherArgs.length != 2) {
>             System.err.println("Usage: wordcount <in> <out>");
>             System.exit(2);
>         }
>         Job job = new Job(conf, "word count");
>         job.setJarByClass(WordCount.class);
>         job.setMapperClass(WordCount.TokenizerMapper.class);
>         job.setCombinerClass(WordCount.IntSumReducer.class);
>         job.setReducerClass(WordCount.IntSumReducer.class);
>         job.setOutputKeyClass(Text.class);
>         job.setOutputValueClass(IntWritable.class);
>         org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
>         org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
>         System.exit(job.waitForCompletion(true) ? 0 : 1);
>     }
>
>
>
> Ideas?
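
On the JobClient warning earlier in the thread ("Applications should
implement Tool"): it goes away if the driver implements Tool and is
launched through ToolRunner, which also handles GenericOptionsParser
for you. A minimal sketch reusing your mapper/reducer classes (the
driver class name here is illustrative, not your actual code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class WordCountDriver extends Configured implements Tool {

        // ToolRunner has already parsed the generic options (-conf, -D,
        // -fs, -jt) when run() is called; args holds only the leftovers.
        public int run(String[] args) throws Exception {
            if (args.length != 2) {
                System.err.println("Usage: wordcount <in> <out>");
                return 2;
            }
            Job job = new Job(getConf(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCount.TokenizerMapper.class);
            job.setCombinerClass(WordCount.IntSumReducer.class);
            job.setReducerClass(WordCount.IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
        }
    }

Run with "hadoop jar WordCount.jar input output" as before; generic
options such as -fs and -jt can then override the configs per run.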
>
>



-- 
Harsh J