Posted to mapreduce-user@hadoop.apache.org by Suhail Rehman <su...@gmail.com> on 2010/01/19 18:15:55 UTC

Using SequenceFiles in Hadoop for an imaging application.

I am using Hadoop to write some sample code that takes every image and
blurs it with a blurring filter.

I was able to convert all my input images into a sequence file, and I've
written the following Hadoop code to perform the blurring operation. (The
input sequence file's key-value pairs are Text (the image filename) and
BytesWritable (the image contents) for each record.)

I've used the TAR-to-SequenceFile creator available here:
http://stuartsierra.com/2008/04/24/a-million-little-files

But for some reason, I cannot get it working. I've pasted the code below
(not including the imports); please let me know what I'm doing wrong (I'm
new to Hadoop, by the way).

import com.jhlabs.image.BoxBlurFilter;

//Mapper Class

public class BlurMapper extends MapReduceBase implements
Mapper<Text,BytesWritable,Text,BytesWritable> {

        public void map(Text key, BytesWritable file,
                        OutputCollector<Text, BytesWritable> output,
                        Reporter reporter) throws IOException {

                   //Read Current Image from File.
                   BufferedImage img = ImageIO.read(new ByteArrayInputStream(file.getBytes()));
                   BufferedImage dest = null;

                   //Apply Blur on Filter Operation - External JAR
                   BoxBlurFilter BlurOp = new BoxBlurFilter(10, 10, 2);
                   BlurOp.filter(img, dest);

                   ByteArrayOutputStream outputbytes = new ByteArrayOutputStream();
                   ImageIO.write(dest, "jpeg", outputbytes);
                   BytesWritable outfile = new BytesWritable(outputbytes.toByteArray());

                   output.collect(key, outfile);
        }

}

//MAIN CLASS

public class BlurVideoHadoop {

        public static void main(String[] args) {

                if (args.length != 2) {
                        System.err.println("Usage: blurvideo input output");
                        System.exit(-1);
                }

                JobClient client = new JobClient();
                JobConf conf = new JobConf(BlurVideoHadoop.class);

                conf.setOutputKeyClass(Text.class);
                conf.setOutputValueClass(BytesWritable.class);

                SequenceFileInputFormat.addInputPath(conf, new Path(args[0]));
                SequenceFileOutputFormat.setOutputPath(conf, new Path(args[1]));

                conf.setMapperClass(BlurMapper.class);
                conf.setReducerClass(org.apache.hadoop.mapred.lib.IdentityReducer.class);

                client.setConf(conf);
                try {
                        JobClient.runJob(conf);
                } catch (Exception e) {
                        e.printStackTrace();
                }
        }
}



Thanks,

Regards,

Suhail Rehman
MS by Research in Computer Science
International Institute of Information Technology - Hyderabad
rehman@research.iiit.ac.in
---------------------------------------------------------------------
http://research.iiit.ac.in/~rehman
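One pitfall in the mapper above that the thread never revisits: the JH Labs filters follow the java.awt.image.BufferedImageOp convention, where filter(src, dst) returns the destination image and allocates one when dst is null. Because the posted code discards the return value, dest stays null and ImageIO.write(dest, ...) would throw a NullPointerException; the call should presumably read dest = BlurOp.filter(img, dest). A self-contained sketch of the same convention using the JDK's own ConvolveOp (an illustrative stand-in for the external BoxBlurFilter):

```java
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.util.Arrays;

public class FilterReturnDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);

        // 3x3 box-blur kernel: each output pixel is the mean of its neighborhood.
        float[] weights = new float[9];
        Arrays.fill(weights, 1f / 9f);
        BufferedImageOp blur = new ConvolveOp(new Kernel(3, 3, weights));

        // filter(src, dst) RETURNS the destination image; with dst == null the
        // op allocates a compatible one. Discarding the return value, as the
        // posted mapper does, leaves the caller's reference null.
        BufferedImage dest = blur.filter(img, null);
        System.out.println(dest != null);
    }
}
```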

Re: Using SequenceFiles in Hadoop for an imaging application.

Posted by Suhail Rehman <su...@gmail.com>.
All issues are fixed now: the imaging primitives were in an external JAR
that needed to be bundled into the final Hadoop job JAR. Thanks, guys!

Suhail
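For anyone hitting the same wall, here is a sketch of the two usual ways to make an external JAR like the JH Labs filters visible to map tasks; the file and JAR names below are illustrative, not taken from the thread:

```shell
# Assumptions: compiled job classes in classes/, the external filter JAR
# named jhlabs-filters.jar, and the driver class BlurVideoHadoop.

# Option 1: embed the dependency under lib/ inside the job JAR.
# Hadoop adds any lib/*.jar inside the job JAR to the task classpath.
mkdir -p lib
cp jhlabs-filters.jar lib/
jar cf blurvideo.jar -C classes . lib

# Option 2: ship it through the distributed cache at submit time.
# Note: -libjars is only honored when the driver parses options with
# GenericOptionsParser (or implements Tool), which the warning in the
# job output elsewhere in this thread also suggests.
hadoop jar blurvideo.jar BlurVideoHadoop -libjars jhlabs-filters.jar input output
```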

On Wed, Jan 20, 2010 at 11:45 AM, Suhail Rehman <su...@gmail.com>wrote:

> OK, I fixed that problem by adding
>
> conf.setInputFormat(SequenceFileInputFormat.class);
>
> to the job configuration.
>
> [...]



-- 
Regards,

Suhail Rehman
MS by Research in Computer Science
International Institute of Information Technology - Hyderabad
rehman@research.iiit.ac.in
---------------------------------------------------------------------
http://research.iiit.ac.in/~rehman

Re: Using SequenceFiles in Hadoop for an imaging application.

Posted by Suhail Rehman <su...@gmail.com>.
OK, I fixed that problem by adding

conf.setInputFormat(SequenceFileInputFormat.class);

to the job configuration.

Does the BytesWritable file object point to exactly one file inside the
Sequence File?
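On the question above: yes, each record in a SequenceFile is one key-value pair, and the TAR-to-SequenceFile converter writes one record per archive entry, so each map() call should see exactly one file's bytes in the BytesWritable value. When in doubt, the key/value classes a sequence file was written with can be read straight from its header. A small sketch using the same mapred-era API as the thread (the class name is made up, and it assumes the Hadoop jars on the classpath and the file path as the first argument):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class SeqFileHeader {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Open the file and report the key/value classes recorded in its
        // header; a mismatch here (e.g. LongWritable keys from the default
        // TextInputFormat) is exactly what produces the ClassCastException
        // seen in this thread.
        SequenceFile.Reader reader =
                new SequenceFile.Reader(fs, new Path(args[0]), conf);
        try {
            System.out.println("key class:   " + reader.getKeyClassName());
            System.out.println("value class: " + reader.getValueClassName());
        } finally {
            reader.close();
        }
    }
}
```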



On Wed, Jan 20, 2010 at 10:54 AM, Suhail Rehman <su...@gmail.com>wrote:

> This is the console output when trying to run this application. I'm sure
> the input sequence file is a (Text, BytesWritable) pair.
>
> Suhail
>
> [...]



-- 
Regards,

Suhail Rehman
MS by Research in Computer Science
International Institute of Information Technology - Hyderabad
rehman@research.iiit.ac.in
---------------------------------------------------------------------
http://research.iiit.ac.in/~rehman <http://research.iiit.ac.in/%7Erehman>

Re: Using SequenceFiles in Hadoop for an imaging application.

Posted by Suhail Rehman <su...@gmail.com>.
This is the console output when trying to run this application. I'm sure
the input sequence file is a (Text, BytesWritable) pair.

Suhail

10/01/20 10:52:34 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found
in the classpath. Usage of hadoop-site.xml is deprecated. Instead use
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
10/01/20 10:52:34 WARN mapred.JobClient: Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
10/01/20 10:52:34 INFO mapred.FileInputFormat: Total input paths to process
: 1
10/01/20 10:52:34 INFO mapred.JobClient: Running job: job_201001122200_0019
10/01/20 10:52:35 INFO mapred.JobClient:  map 0% reduce 0%
10/01/20 10:52:44 INFO mapred.JobClient: Task Id :
attempt_201001122200_0019_m_000000_0, Status : FAILED
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable incompatible
with org.apache.hadoop.io.Text
    at BlurMapper.map(BlurMapper.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

On Wed, Jan 20, 2010 at 5:41 AM, Jeff Zhang <zj...@gmail.com> wrote:

> Could you paste your exception message?
>
> [...]



-- 
Regards,

Suhail Rehman
MS by Research in Computer Science
International Institute of Information Technology - Hyderabad
rehman@research.iiit.ac.in
---------------------------------------------------------------------
http://research.iiit.ac.in/~rehman

Re: Using SequenceFiles in Hadoop for an imaging application.

Posted by Jeff Zhang <zj...@gmail.com>.
Could you paste your exception message?



On Wed, Jan 20, 2010 at 1:15 AM, Suhail Rehman <su...@gmail.com>wrote:

> I am using hadoop to write some sample code which takes every image and
> blurs it using a blurring filter.
>
> I was able to convert all my input images into a sequence file, and I've
> written the following hadoop code to perform the blurring operation. (The
> input sequencefile key-value pairs are Text (filename of the image),
> BytesWritable (image contents) for each record).
>
> 've used the TAR to sequence file creator available here:
> http://stuartsierra.com/2008/04/24/a-million-little-files
>
> But for some reason, I cannot get it working. I've pasted the code here
> (not including the imports), let me know what I'm doing wrong (I'm new to
> Hadoop btw).
>
> import com.jhlabs.image.BoxBlurFilter;
>
> //Mapper Class
>
> public class BlurMapper extends MapReduceBase implements
> Mapper<Text,BytesWritable,Text,BytesWritable> {
>
>         public void map(Text key, BytesWritable file,
>
>                         OutputCollector<Text, BytesWritable> output, Reporter reporter) throws
> IOException {
>
>
>                    //Read Current Image from File.
>                    BufferedImage img = ImageIO.read(new ByteArrayInputStream
>
> (file.getBytes()));
>                    BufferedImage dest = null;
>
>                    //Apply Blur on Filter Operation - External JAR
>                    BoxBlurFilter BlurOp = new BoxBlurFilter(10,10,2);
>                    BlurOp.filter(img, dest);
>
>                    ByteArrayOutputStream outputbytes = new ByteArrayOutputStream();
>                    ImageIO.write(dest, "jpeg", outputbytes);
>                    BytesWritable outfile = new BytesWritable(outputbytes.toByteArray());
>
>                    output.collect(key, outfile);
>         }
>
> }
>
> //MAIN CLASS
>
> public class BlurVideoHadoop {
>
>         public static void main(String[] args) {
>
>                 if(args.length!=2) {
>
>                         System.err.println("Usage: blurvideo input output");
>                         System.exit(-1);
>
>                 }
>                 JobClient client = new JobClient();
>                 JobConf conf = new JobConf(BlurVideoHadoop.class);
>
>                 conf.setOutputKeyClass(Text.class);
>                 conf.setOutputValueClass(BytesWritable.class);
>
>                 SequenceFileInputFormat.addInputPath(conf, new Path(args[0]));
>                 SequenceFileOutputFormat.setOutputPath(conf, new Path(args[1]));
>
>
>                 conf.setMapperClass(BlurMapper.class);
>
>
>                 conf.setReducerClass(org.apache.hadoop.mapred.lib.IdentityReducer.class);
>
>                 client.setConf(conf);
>                 try {
>
>                         JobClient.runJob(conf);
>                 } catch (Exception e) {
>                         e.printStackTrace();
>                 }
>         }
>
>
>
> Thanks,
>
> Regards,
>
> Suhail Rehman
> MS by Research in Computer Science
> International Institute of Information Technology - Hyderabad
> rehman@research.iiit.ac.in
> ---------------------------------------------------------------------
> http://research.iiit.ac.in/~rehman <http://research.iiit.ac.in/%7Erehman>
>



-- 
Best Regards

Jeff Zhang