Posted to common-user@hadoop.apache.org by jamal sasha <ja...@gmail.com> on 2013/01/22 00:52:23 UTC

passing arguments to hadoop job

Hi,
  Let's say I have the standard hello-world program
http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0

Now, let's say I want to start the counting not from zero but from 200000,
so my baseline is 200000. (For example, if the word "hello" appears once, its
count should come out as 200001.)

I modified the Reduce code as follows:
 public static class Reduce extends MapReduceBase
     implements Reducer<Text, IntWritable, Text, IntWritable> {
   private static int baseSum;

   // Called once per task before reduce(); reads the job parameter.
   public void configure(JobConf job) {
     baseSum = Integer.parseInt(job.get("basecount"));
   }

   public void reduce(Text key, Iterator<IntWritable> values,
       OutputCollector<Text, IntWritable> output, Reporter reporter)
       throws IOException {
     int sum = baseSum; // start the count from the base value instead of zero
     while (values.hasNext()) {
       sum += values.next().get();
     }
     output.collect(key, new IntWritable(sum));
   }
 }


And in main added:
   conf.setInt("basecount",200000);



So my hope was that this would do the trick,
but it's not working: the code runs as if the base count were never set :(
How do I resolve this?
Thanks

Re: passing arguments to hadoop job

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
OK. The easiest way I can think of to debug this is to add a
System.out.println in your Reduce.configure code. The output will appear in
the logs specific to your reduce tasks. You can access these logs from the
web UI of the JobTracker: navigate to your job page from the JobTracker UI,
go to "reduce", select any task, and click on the task log links. Look under
'stdout'.
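
(A minimal sketch of the kind of debug print meant here, assuming the old
org.apache.hadoop.mapred API and the "basecount" property from this thread:)

// Debug sketch: print the raw property value inside configure().
public void configure(JobConf job) {
  String raw = job.get("basecount");         // null if the property never reached the task
  System.out.println("basecount = " + raw);  // shows up under 'stdout' in the task logs
  baseSum = (raw == null) ? 0 : Integer.parseInt(raw);
}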

Thanks
Hemanth


On Tue, Jan 22, 2013 at 11:19 AM, jamal sasha <ja...@gmail.com> wrote:

> Hi,
>   The driver code is actually the same as of java word count old example:
> copying from site
> public static void main(String[] args) throws Exception {
>     JobConf conf = new JobConf(WordCount.class);
>      conf.setJobName("wordcount");
>
>      conf.setOutputKeyClass(Text.class);
>      conf.setOutputValueClass(IntWritable.class);
>      conf.setInt("basecount", 200000); // added this line
>      conf.setMapperClass(Map.class);
>      conf.setCombinerClass(Reduce.class);
>      conf.setReducerClass(Reduce.class);
>
>      conf.setInputFormat(TextInputFormat.class);
>      conf.setOutputFormat(TextOutputFormat.class);
>
>      FileInputFormat.setInputPaths(conf, new Path(args[0]));
>      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
>
>      JobClient.runJob(conf);
>    }
>
>
> Reducer class
>  public static class Reduce extends MapReduceBase implements Reducer<Text,
> IntWritable, Text, IntWritable> {
>    private static int baseSum;
>    public void configure(JobConf job) {
>      baseSum = Integer.parseInt(job.get("basecount"));
>    }
>    public void reduce(Text key, Iterator<IntWritable> values,
>        OutputCollector<Text, IntWritable> output, Reporter reporter)
>        throws IOException {
>      int sum = baseSum;
>      while (values.hasNext()) {
>        sum += values.next().get();
>      }
>      output.collect(key, new IntWritable(sum));
>    }
>  }
>
> On Mon, Jan 21, 2013 at 8:29 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
> >
> > Hi,
> >
> > Please note that you are referring to a very old version of Hadoop. the
> current stable release is Hadoop 1.x. The API has changed in 1.x. Take a
> look at the wordcount example here:
> http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#Example%3A+WordCount+v2.0
> >
> > But, in principle your method should work. I wrote it using the new API
> in a similar fashion and it worked fine. Can you show the code of your
> driver program (i.e. where you have main) ?
> >
> > Thanks
> > hemanth
> >
> >
> >
> > On Tue, Jan 22, 2013 at 5:22 AM, jamal sasha <ja...@gmail.com>
> wrote:
> >>
> >> Hi,
> >>   Lets say I have the standard helloworld program
> >>
> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
> >>
> >> Now, lets say, I want to start the counting not from zero but from
> 200000.
> >> So my reference line is 200000.
> >>
> >> I modified the Reduce code as following:
> >>  public static class Reduce extends MapReduceBase implements
> Reducer<Text, IntWritable, Text, IntWritable> {
> >>      private static int baseSum ;
> >>      public void configure(JobConf job){
> >>      baseSum = Integer.parseInt(job.get("basecount"));
> >>
> >>      }
> >>       public void reduce(Text key, Iterator<IntWritable> values,
> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
> IOException {
> >>         int sum = baseSum;
> >>         while (values.hasNext()) {
> >>           sum += values.next().get();
> >>         }
> >>         output.collect(key, new IntWritable(sum));
> >>       }
> >>     }
> >>
> >>
> >> And in main added:
> >>    conf.setInt("basecount",200000);
> >>
> >>
> >>
> >> So my hope was this should have done the trick..
> >> But its not working. the code is running normally :(
> >> How do i resolve this?
> >> Thanks
> >
> >
>

Re: passing arguments to hadoop job

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Jamal,

    When you set something using "conf.set("extparam", "value")", you need
to read it by using "context.getConfiguration().get("extparam")" in your
mapper or reducer. Also, no need to declare it as a global variable.

One more thing, try to use the new API.
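
(A hedged sketch of what that looks like with the new org.apache.hadoop.mapreduce
API; the property name "basecount" is taken from this thread:)

// Sketch only: a new-API reducer reading a job parameter via the Context.
// (Note: a class like this should not also be registered as the combiner,
// or the base offset would be added once per combine pass as well.)
public static class Reduce
    extends Reducer<Text, IntWritable, Text, IntWritable> {
  private int baseSum;

  @Override
  protected void setup(Context context) {
    // getInt pairs with conf.setInt(...) in the driver; 0 is the fallback default.
    baseSum = context.getConfiguration().getInt("basecount", 0);
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = baseSum;
    for (IntWritable v : values) {
      sum += v.get();
    }
    context.write(key, new IntWritable(sum));
  }
}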

HTH

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Tue, Jan 22, 2013 at 11:19 AM, jamal sasha <ja...@gmail.com> wrote:

> Hi,
>   The driver code is actually the same as of java word count old example:
> copying from site
> public static void main(String[] args) throws Exception {
>     JobConf conf = new JobConf(WordCount.class);
>      conf.setJobName("wordcount");
>
>      conf.setOutputKeyClass(Text.class);
>      conf.setOutputValueClass(IntWritable.class);
>      conf.setInt("basecount", 200000); // added this line
>      conf.setMapperClass(Map.class);
>      conf.setCombinerClass(Reduce.class);
>      conf.setReducerClass(Reduce.class);
>
>      conf.setInputFormat(TextInputFormat.class);
>      conf.setOutputFormat(TextOutputFormat.class);
>
>      FileInputFormat.setInputPaths(conf, new Path(args[0]));
>      FileOutputFormat.setOutputPath(conf, new Path(args[1]));
>
>      JobClient.runJob(conf);
>    }
>
>
> Reducer class
>   public static class Reduce extends MapReduceBase implements
> Reducer<Text, IntWritable, Text, IntWritable> {
>    private static int baseSum;
>    public void configure(JobConf job) {
>      baseSum = Integer.parseInt(job.get("basecount"));
>    }
>    public void reduce(Text key, Iterator<IntWritable> values,
>        OutputCollector<Text, IntWritable> output, Reporter reporter)
>        throws IOException {
>      int sum = baseSum;
>      while (values.hasNext()) {
>        sum += values.next().get();
>      }
>      output.collect(key, new IntWritable(sum));
>    }
>  }
>
> On Mon, Jan 21, 2013 at 8:29 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
> >
> > Hi,
> >
> > Please note that you are referring to a very old version of Hadoop. the
> current stable release is Hadoop 1.x. The API has changed in 1.x. Take a
> look at the wordcount example here:
> http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#Example%3A+WordCount+v2.0
> >
> > But, in principle your method should work. I wrote it using the new API
> in a similar fashion and it worked fine. Can you show the code of your
> driver program (i.e. where you have main) ?
> >
> > Thanks
> > hemanth
> >
> >
> >
> > On Tue, Jan 22, 2013 at 5:22 AM, jamal sasha <ja...@gmail.com>
> wrote:
> >>
> >> Hi,
> >>   Lets say I have the standard helloworld program
> >>
> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
> >>
> >> Now, lets say, I want to start the counting not from zero but from
> 200000.
> >> So my reference line is 200000.
> >>
> >> I modified the Reduce code as following:
> >>  public static class Reduce extends MapReduceBase implements
> Reducer<Text, IntWritable, Text, IntWritable> {
> >>      private static int baseSum ;
> >>      public void configure(JobConf job){
> >>      baseSum = Integer.parseInt(job.get("basecount"));
> >>
> >>      }
> >>       public void reduce(Text key, Iterator<IntWritable> values,
> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
> IOException {
> >>         int sum = baseSum;
> >>         while (values.hasNext()) {
> >>           sum += values.next().get();
> >>         }
> >>         output.collect(key, new IntWritable(sum));
> >>       }
> >>     }
> >>
> >>
> >> And in main added:
> >>    conf.setInt("basecount",200000);
> >>
> >>
> >>
> >> So my hope was this should have done the trick..
> >> But its not working. the code is running normally :(
> >> How do i resolve this?
> >> Thanks
> >
> >
>

Re: passing arguments to hadoop job

Posted by jamal sasha <ja...@gmail.com>.
Hi,
  The driver code is actually the same as the old Java word count example;
copying from the site:
public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setInt("basecount", 200000); // added this line
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
}


Reducer class
 public static class Reduce extends MapReduceBase
     implements Reducer<Text, IntWritable, Text, IntWritable> {
   private static int baseSum;

   public void configure(JobConf job) {
     baseSum = Integer.parseInt(job.get("basecount"));
   }

   public void reduce(Text key, Iterator<IntWritable> values,
       OutputCollector<Text, IntWritable> output, Reporter reporter)
       throws IOException {
     int sum = baseSum;
     while (values.hasNext()) {
       sum += values.next().get();
     }
     output.collect(key, new IntWritable(sum));
   }
 }

On Mon, Jan 21, 2013 at 8:29 PM, Hemanth Yamijala <yh...@thoughtworks.com>
wrote:
>
> Hi,
>
> Please note that you are referring to a very old version of Hadoop. the
current stable release is Hadoop 1.x. The API has changed in 1.x. Take a
look at the wordcount example here:
http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#Example%3A+WordCount+v2.0
>
> But, in principle your method should work. I wrote it using the new API
in a similar fashion and it worked fine. Can you show the code of your
driver program (i.e. where you have main) ?
>
> Thanks
> hemanth
>
>
>
> On Tue, Jan 22, 2013 at 5:22 AM, jamal sasha <ja...@gmail.com>
wrote:
>>
>> Hi,
>>   Lets say I have the standard helloworld program
>>
http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>
>> Now, lets say, I want to start the counting not from zero but from
200000.
>> So my reference line is 200000.
>>
>> I modified the Reduce code as following:
>>  public static class Reduce extends MapReduceBase implements
Reducer<Text, IntWritable, Text, IntWritable> {
>>      private static int baseSum ;
>>      public void configure(JobConf job){
>>      baseSum = Integer.parseInt(job.get("basecount"));
>>
>>      }
>>       public void reduce(Text key, Iterator<IntWritable> values,
OutputCollector<Text, IntWritable> output, Reporter reporter) throws
IOException {
>>         int sum = baseSum;
>>         while (values.hasNext()) {
>>           sum += values.next().get();
>>         }
>>         output.collect(key, new IntWritable(sum));
>>       }
>>     }
>>
>>
>> And in main added:
>>    conf.setInt("basecount",200000);
>>
>>
>>
>> So my hope was this should have done the trick..
>> But its not working. the code is running normally :(
>> How do i resolve this?
>> Thanks
>
>

Re: passing arguments to hadoop job

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Hi,

Please note that you are referring to a very old version of Hadoop. The
current stable release is Hadoop 1.x, and the API has changed in 1.x. Take a
look at the wordcount example here:
http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#Example%3A+WordCount+v2.0


But in principle your method should work. I wrote it using the new API in
a similar fashion and it worked fine. Can you show the code of your driver
program (i.e., where you have main)?
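
(For reference, a hedged sketch of a new-API driver along these lines; this is
not Hemanth's actual code, and it assumes org.apache.hadoop.mapreduce.Job plus
the lib.input/lib.output FileInputFormat and FileOutputFormat:)

// Sketch: new-API driver passing "basecount" to the tasks.
public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  conf.setInt("basecount", 200000); // set before constructing the Job, which copies the conf
  Job job = new Job(conf, "wordcount");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(Map.class);
  job.setReducerClass(Reduce.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}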

Thanks
hemanth



On Tue, Jan 22, 2013 at 5:22 AM, jamal sasha <ja...@gmail.com> wrote:

> Hi,
>   Lets say I have the standard helloworld program
>
> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>
> Now, lets say, I want to start the counting not from zero but from 200000.
> So my reference line is 200000.
>
> I modified the Reduce code as following:
>  public static class Reduce extends MapReduceBase implements Reducer<Text,
> IntWritable, Text, IntWritable> {
>    private static int baseSum;
>    public void configure(JobConf job) {
>      baseSum = Integer.parseInt(job.get("basecount"));
>    }
>    public void reduce(Text key, Iterator<IntWritable> values,
>        OutputCollector<Text, IntWritable> output, Reporter reporter)
>        throws IOException {
>      int sum = baseSum;
>      while (values.hasNext()) {
>        sum += values.next().get();
>      }
>      output.collect(key, new IntWritable(sum));
>    }
>  }
>
>
> And in main added:
>    conf.setInt("basecount",200000);
>
>
>
> So my hope was this should have done the trick..
> But its not working. the code is running normally :(
> How do i resolve this?
> Thanks
>

Re: passing arguments to hadoop job

Posted by Satbeer Lamba <sa...@gmail.com>.
Just add
System.out.println(baseSum);
to check whether the expected value is displayed. If it is not, the value is
not set properly in the JobConf.

Also, you don't need to remove static. Sorry about that; it doesn't change
the expected output, though it isn't required.
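
(Since Jamal asked below how to add a logger: a hedged sketch using Apache
commons-logging, which Hadoop itself uses; the message text is illustrative:)

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Inside the Reduce class: log the configured value instead of printing it.
private static final Log LOG = LogFactory.getLog(Reduce.class);

public void configure(JobConf job) {
  baseSum = job.getInt("basecount", 0);         // getInt avoids the null/parseInt dance
  LOG.info("basecount resolved to " + baseSum); // appears under 'syslog' in the task logs
}
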
 On Jan 22, 2013 7:34 AM, "jamal sasha" <ja...@gmail.com> wrote:

> Hi,
>   Thanks for taking out time.
> Will this not work : conf.setInt("basecount",200000); ??
> I am not sure how to add a logger or syso (new to both Java and Hadoop :( )
>
>
>
> On Mon, Jan 21, 2013 at 5:55 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>
>> Please be sure that you are getting the value of baseSum in reducer by
>> adding a logger or syso.
>>
>> Also consider removing static in declaration of baseSum as it would add
>> counts of previous keys.
>> On Jan 22, 2013 7:17 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>
>>> The second one.
>>> If the word hello appears once, its count is  2000001.
>>> :)
>>>
>>>
>>> On Mon, Jan 21, 2013 at 5:40 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>>>
>>>> Do you mean to say you want to count the words from 200000 th line
>>>> onwards?
>>>>
>>>> OR
>>>>
>>>> You want to start counting from 2000000?
>>>> For example if HELLO appears once it's count is 2000001.
>>>>
>>>> Please clarify
>>>> On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>   Lets say I have the standard helloworld program
>>>>>
>>>>> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>>>>
>>>>> Now, lets say, I want to start the counting not from zero but from
>>>>> 200000.
>>>>> So my reference line is 200000.
>>>>>
>>>>> I modified the Reduce code as following:
>>>>>  public static class Reduce extends MapReduceBase implements
>>>>> Reducer<Text, IntWritable, Text, IntWritable> {
>>>>>    private static int baseSum;
>>>>>    public void configure(JobConf job) {
>>>>>      baseSum = Integer.parseInt(job.get("basecount"));
>>>>>    }
>>>>>    public void reduce(Text key, Iterator<IntWritable> values,
>>>>>        OutputCollector<Text, IntWritable> output, Reporter reporter)
>>>>>        throws IOException {
>>>>>      int sum = baseSum;
>>>>>      while (values.hasNext()) {
>>>>>        sum += values.next().get();
>>>>>      }
>>>>>      output.collect(key, new IntWritable(sum));
>>>>>    }
>>>>>  }
>>>>>
>>>>>
>>>>> And in main added:
>>>>>    conf.setInt("basecount",200000);
>>>>>
>>>>>
>>>>>
>>>>> So my hope was this should have done the trick..
>>>>> But its not working. the code is running normally :(
>>>>> How do i resolve this?
>>>>> Thanks
>>>>>
>>>>
>>>
>

Re: passing arguments to hadoop job

Posted by Satbeer Lamba <sa...@gmail.com>.
Just add
System.out.println (baseCount);
To check if expected value is getting displayed. If not then the value is
not set propely in jobconf.

Also you don't need to remove static. Sorry about that, it doesn't change
the expected output, though it's not required.
 On Jan 22, 2013 7:34 AM, "jamal sasha" <ja...@gmail.com> wrote:

> Hi,
>   Thanks for taking out time.
> Will this not work : conf.setInt("basecount",200000); ??
> I am not sure how to add loger or syso (new to both java and hadoop :( )
>
>
>
> On Mon, Jan 21, 2013 at 5:55 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>
>> Please be sure that you are getting the value of baseSum in reducer by
>> adding a logger or syso.
>>
>> Also consider removing static in declaration of baseSum as it would add
>> counts of previous keys.
>> On Jan 22, 2013 7:17 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>
>>> The second one.
>>> If the word hello appears once, its count is  2000001.
>>> :)
>>>
>>>
>>> On Mon, Jan 21, 2013 at 5:40 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>>>
>>>> Do you mean to say you want to count the words from 200000 th line
>>>> onwards?
>>>>
>>>> OR
>>>>
>>>> You want to start counting from 2000000?
>>>> For example if HELLO appears once it's count is 2000001.
>>>>
>>>> Please clarify
>>>> On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>   Let's say I have the standard hello-world program
>>>>>
>>>>> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>>>>
>>>>> Now, let's say I want to start the counting not from zero but from
>>>>> 200000.
>>>>> So my reference line is 200000.
>>>>>
>>>>> I modified the Reduce code as follows:
>>>>>  public static class Reduce extends MapReduceBase implements
>>>>> Reducer<Text, IntWritable, Text, IntWritable> {
>>>>>      private static int baseSum;
>>>>>      public void configure(JobConf job) {
>>>>>        baseSum = Integer.parseInt(job.get("basecount"));
>>>>>      }
>>>>>      public void reduce(Text key, Iterator<IntWritable> values,
>>>>> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
>>>>> IOException {
>>>>>        int sum = baseSum;
>>>>>        while (values.hasNext()) {
>>>>>          sum += values.next().get();
>>>>>        }
>>>>>        output.collect(key, new IntWritable(sum));
>>>>>      }
>>>>>    }
>>>>>
>>>>>
>>>>> And in main added:
>>>>>    conf.setInt("basecount",200000);
>>>>>
>>>>>
>>>>>
>>>>> So my hope was this should have done the trick...
>>>>> But it's not working; the code runs as if nothing changed :(
>>>>> How do I resolve this?
>>>>> Thanks
>>>>>
>>>>
>>>
>

Re: passing arguments to hadoop job

Posted by jamal sasha <ja...@gmail.com>.
Hi,
  Thanks for taking the time.
Will this not work: conf.setInt("basecount", 200000); ??
I am not sure how to add a logger or syso (new to both Java and Hadoop :( )
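
For reference, a minimal sketch of the matching pair of calls (the key name is
the one used in this thread; the driver line has to run before the job is
submitted, and setInt stores the value as the string "200000", which is why
job.get("basecount") followed by Integer.parseInt can also work once the key
is actually set):

// driver side, in main, before the job is submitted:
conf.setInt("basecount", 200000);

// task side, inside the reducer's configure(JobConf job):
baseSum = job.getInt("basecount", 0);  // 0 is only a fallback default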



On Mon, Jan 21, 2013 at 5:55 PM, Satbeer Lamba <sa...@gmail.com>wrote:

> Please be sure that you are getting the value of baseSum in the reducer by
> adding a logger or syso.
>
> Also consider removing static in the declaration of baseSum, as it could
> carry over counts from previous keys.
> On Jan 22, 2013 7:17 AM, "jamal sasha" <ja...@gmail.com> wrote:
>
>> The second one.
>> If the word hello appears once, its count is 200001.
>> :)
>>
>>
>> On Mon, Jan 21, 2013 at 5:40 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>>
>>> Do you mean to say you want to count the words from the 200000th line
>>> onwards?
>>>
>>> OR
>>>
>>> You want to start counting from 200000?
>>> For example, if HELLO appears once, its count is 200001.
>>>
>>> Please clarify
>>> On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>   Let's say I have the standard hello-world program
>>>>
>>>> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>>>
>>>> Now, let's say I want to start the counting not from zero but from
>>>> 200000.
>>>> So my reference line is 200000.
>>>>
>>>> I modified the Reduce code as follows:
>>>>  public static class Reduce extends MapReduceBase implements
>>>> Reducer<Text, IntWritable, Text, IntWritable> {
>>>>      private static int baseSum;
>>>>      public void configure(JobConf job) {
>>>>        baseSum = Integer.parseInt(job.get("basecount"));
>>>>      }
>>>>      public void reduce(Text key, Iterator<IntWritable> values,
>>>> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
>>>> IOException {
>>>>        int sum = baseSum;
>>>>        while (values.hasNext()) {
>>>>          sum += values.next().get();
>>>>        }
>>>>        output.collect(key, new IntWritable(sum));
>>>>      }
>>>>    }
>>>>
>>>>
>>>> And in main added:
>>>>    conf.setInt("basecount",200000);
>>>>
>>>>
>>>>
>>>> So my hope was this should have done the trick...
>>>> But it's not working; the code runs as if nothing changed :(
>>>> How do I resolve this?
>>>> Thanks
>>>>
>>>
>>

Re: passing arguments to hadoop job

Posted by Satbeer Lamba <sa...@gmail.com>.
Please be sure that you are getting the value of baseSum in the reducer by
adding a logger or syso.

Also consider removing static in the declaration of baseSum, as it could
carry over counts from previous keys.
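
For the logger route, a minimal sketch using Apache Commons Logging, the
logging facade Hadoop's own classes use (the field name and message below are
just illustrative):

// at the top of the file:
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;

private static final Log LOG = LogFactory.getLog(Reduce.class);

public void configure(JobConf job) {
  baseSum = job.getInt("basecount", 0);
  LOG.info("basecount = " + baseSum);  // typically lands in the task's syslog
}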
On Jan 22, 2013 7:17 AM, "jamal sasha" <ja...@gmail.com> wrote:

> The second one.
> If the word hello appears once, its count is 200001.
> :)
>
>
> On Mon, Jan 21, 2013 at 5:40 PM, Satbeer Lamba <sa...@gmail.com>wrote:
>
>> Do you mean to say you want to count the words from the 200000th line
>> onwards?
>>
>> OR
>>
>> You want to start counting from 200000?
>> For example, if HELLO appears once, its count is 200001.
>>
>> Please clarify
>> On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:
>>
>>> Hi,
>>>   Let's say I have the standard hello-world program
>>>
>>> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>>
>>> Now, let's say I want to start the counting not from zero but from
>>> 200000.
>>> So my reference line is 200000.
>>>
>>> I modified the Reduce code as follows:
>>>  public static class Reduce extends MapReduceBase implements
>>> Reducer<Text, IntWritable, Text, IntWritable> {
>>>      private static int baseSum;
>>>      public void configure(JobConf job) {
>>>        baseSum = Integer.parseInt(job.get("basecount"));
>>>      }
>>>      public void reduce(Text key, Iterator<IntWritable> values,
>>> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
>>> IOException {
>>>        int sum = baseSum;
>>>        while (values.hasNext()) {
>>>          sum += values.next().get();
>>>        }
>>>        output.collect(key, new IntWritable(sum));
>>>      }
>>>    }
>>>
>>>
>>> And in main added:
>>>    conf.setInt("basecount",200000);
>>>
>>>
>>>
>>> So my hope was this should have done the trick...
>>> But it's not working; the code runs as if nothing changed :(
>>> How do I resolve this?
>>> Thanks
>>>
>>
>

Re: passing arguments to hadoop job

Posted by jamal sasha <ja...@gmail.com>.
The second one.
If the word hello appears once, its count is 200001.
:)
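
Concretely, assuming basecount is set to 200000 as in the original post: a
word that appears 3 times in the input should come out as 200000 + 3 = 200003.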


On Mon, Jan 21, 2013 at 5:40 PM, Satbeer Lamba <sa...@gmail.com>wrote:

> Do you mean to say you want to count the words from the 200000th line onwards?
>
> OR
>
> You want to start counting from 200000?
> For example, if HELLO appears once, its count is 200001.
>
> Please clarify
> On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:
>
>> Hi,
>>   Let's say I have the standard hello-world program
>>
>> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>>
>> Now, let's say I want to start the counting not from zero but from 200000.
>> So my reference line is 200000.
>>
>> I modified the Reduce code as follows:
>>  public static class Reduce extends MapReduceBase implements
>> Reducer<Text, IntWritable, Text, IntWritable> {
>>      private static int baseSum;
>>      public void configure(JobConf job) {
>>        baseSum = Integer.parseInt(job.get("basecount"));
>>      }
>>      public void reduce(Text key, Iterator<IntWritable> values,
>> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
>> IOException {
>>        int sum = baseSum;
>>        while (values.hasNext()) {
>>          sum += values.next().get();
>>        }
>>        output.collect(key, new IntWritable(sum));
>>      }
>>    }
>>
>>
>> And in main added:
>>    conf.setInt("basecount",200000);
>>
>>
>>
>> So my hope was this should have done the trick...
>> But it's not working; the code runs as if nothing changed :(
>> How do I resolve this?
>> Thanks
>>
>

Re: passing arguments to hadoop job

Posted by Satbeer Lamba <sa...@gmail.com>.
Do you mean to say you want to count the words from the 200000th line onwards?

OR

You want to start counting from 200000?
For example, if HELLO appears once, its count is 200001.

Please clarify.
On Jan 22, 2013 5:22 AM, "jamal sasha" <ja...@gmail.com> wrote:

> Hi,
>   Let's say I have the standard hello-world program
>
> http://hadoop.apache.org/docs/r0.17.0/mapred_tutorial.html#Example%3A+WordCount+v2.0
>
> Now, let's say I want to start the counting not from zero but from 200000.
> So my reference line is 200000.
>
> I modified the Reduce code as follows:
>  public static class Reduce extends MapReduceBase implements Reducer<Text,
> IntWritable, Text, IntWritable> {
>      private static int baseSum;
>      public void configure(JobConf job) {
>        baseSum = Integer.parseInt(job.get("basecount"));
>      }
>      public void reduce(Text key, Iterator<IntWritable> values,
> OutputCollector<Text, IntWritable> output, Reporter reporter) throws
> IOException {
>        int sum = baseSum;
>        while (values.hasNext()) {
>          sum += values.next().get();
>        }
>        output.collect(key, new IntWritable(sum));
>      }
>    }
>
>
> And in main added:
>    conf.setInt("basecount",200000);
>
>
>
> So my hope was this should have done the trick...
> But it's not working; the code runs as if nothing changed :(
> How do I resolve this?
> Thanks
>
