Posted to common-dev@hadoop.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2007/06/08 19:03:29 UTC

[jira] Commented: (HADOOP-372) should allow to specify different inputformat classes for different input dirs for Map/Reduce jobs

    [ https://issues.apache.org/jira/browse/HADOOP-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12502881 ] 

Owen O'Malley commented on HADOOP-372:
--------------------------------------

Ok, my current thoughts are as follows:

interface JobInput {
  void validateInput(JobConf conf);
  List<InputSplit> createSplits(JobConf conf);
}
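As a rough illustration of this contract — validate the configured input up front, then turn it into splits — here is a minimal, hypothetical sketch. A List<String> of paths stands in for JobConf and a plain String stands in for InputSplit; both are placeholders, not the real Hadoop types.

```java
import java.util.ArrayList;
import java.util.List;

public class JobInputSketch {
    public interface JobInput {
        void validateInput(List<String> paths);
        List<String> createSplits(List<String> paths);
    }

    public static class OneSplitPerPath implements JobInput {
        public void validateInput(List<String> paths) {
            // Fail fast before the job is submitted, not at task time.
            if (paths.isEmpty()) {
                throw new IllegalStateException("no input paths configured");
            }
        }

        // One split per configured path; a real implementation would also
        // split large files along block boundaries.
        public List<String> createSplits(List<String> paths) {
            return new ArrayList<>(paths);
        }
    }

    public static void main(String[] args) {
        JobInput input = new OneSplitPerPath();
        List<String> paths = List.of("rankTable", "metaDataTable");
        input.validateInput(paths);
        System.out.println(input.createSplits(paths));  // [rankTable, metaDataTable]
    }
}
```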

The natural implementation over FileSystem files and directories would be:

class FileSystemJobInput implements JobInput {
  ...
  public static void addInputPath(JobConf conf, Path path);
  public static void addInputPath(JobConf conf, Path path, 
                                  Class<? extends RecordReader> reader, 
                                  Class<? extends Mapper> mapper);
  public static void setDefaultRecordReader(JobConf conf, Class<? extends RecordReader> reader);
}
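One plausible way for the addInputPath overloads to work is to append the path to a comma-separated list of input dirs and remember the per-path reader class under a path-specific key. This is a hypothetical sketch only: a plain Map stands in for JobConf, and the key names below are illustrative, not Hadoop's actual configuration keys.

```java
import java.util.HashMap;
import java.util.Map;

public class PerPathConfigSketch {
    public static final Map<String, String> conf = new HashMap<>();

    public static void addInputPath(String path, String readerClass) {
        String dirs = conf.getOrDefault("mapred.input.dirs", "");
        conf.put("mapred.input.dirs", dirs.isEmpty() ? path : dirs + "," + path);
        // A path-specific key lets the framework look up the right
        // reader class later when it builds splits for this path.
        conf.put("mapred.input.dir." + path + ".reader", readerClass);
    }

    public static void main(String[] args) {
        addInputPath("rankTable", "SequenceFileRecordReader");
        addInputPath("classificationTable", "LineRecordReader");
        System.out.println(conf.get("mapred.input.dirs"));
        System.out.println(conf.get("mapred.input.dir.rankTable.reader"));
    }
}
```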

FileSystemJobInput would be the default and would be used by most applications. The other major change is to InputSplit, which gains the ability to define its RecordReader and Mapper.

interface InputSplit extends Writable {
  ...
  RecordReader createRecordReader();
  Mapper createMapper();
}

Finally, RecordReader gets a method to set the InputSplit:

interface RecordReader {
  void initialize(InputSplit split, Progressable progress) throws IOException;
  ...
}

so that RecordReaders have a standard interface and can be created via ReflectionUtils.newInstance.
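Putting the pieces together, the intended flow would be: a split names its RecordReader class, the framework instantiates it generically through a no-arg constructor (the role ReflectionUtils.newInstance plays in Hadoop), then hands it the split via initialize(). The sketch below is hypothetical — all type names are simplified stand-ins for the real interfaces.

```java
public class SplitReaderSketch {
    public interface RecordReader {
        void initialize(InputSplit split);
    }

    public interface InputSplit {
        RecordReader createRecordReader();
    }

    // A reader that just remembers where its split points.
    public static class TextReader implements RecordReader {
        public String source;
        public void initialize(InputSplit split) { this.source = split.toString(); }
    }

    public static class FileSplit implements InputSplit {
        private final Class<? extends RecordReader> readerClass;
        private final String path;

        public FileSplit(String path, Class<? extends RecordReader> readerClass) {
            this.path = path;
            this.readerClass = readerClass;
        }

        // Because every reader is built the same way (no-arg constructor,
        // then initialize), the framework needs no reader-specific code.
        public RecordReader createRecordReader() {
            try {
                RecordReader reader = readerClass.getDeclaredConstructor().newInstance();
                reader.initialize(this);
                return reader;
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }

        @Override public String toString() { return path; }
    }

    public static void main(String[] args) {
        FileSplit split = new FileSplit("rankTable/part-0", TextReader.class);
        TextReader reader = (TextReader) split.createRecordReader();
        System.out.println(reader.source);  // rankTable/part-0
    }
}
```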

> should allow to specify different inputformat classes for different input dirs for Map/Reduce jobs
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-372
>                 URL: https://issues.apache.org/jira/browse/HADOOP-372
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.4.0
>         Environment: all
>            Reporter: Runping Qi
>            Assignee: Owen O'Malley
>
> Right now, the user can specify multiple input directories for a map reduce job. 
> However, the files under all the directories are assumed to be in the same format, 
> with the same key/value classes. This proves to be a serious limitation in many situations. 
> Here is an example. Suppose I have three simple tables: 
> one has URLs and their rank values (page ranks), 
> another has URLs and their classification values, 
> and the third one has the URL meta data such as crawl status, last crawl time, etc. 
> Suppose now I need a job to generate a list of URLs to be crawled next. 
> The decision depends on the info in all three tables.
> Right now, there is no easy way to accomplish this.
> However, this job can be done if the framework allows specifying different input formats for different input dirs.
> Suppose my three tables are in the following directories, respectively: rankTable, classificationTable, and metaDataTable. 
> If we extend JobConf class with the following method (as Owen suggested to me):
>     addInputPath(aPath, anInputFormatClass, anInputKeyClass, anInputValueClass)
> Then I can specify my job as follows:
>     addInputPath(rankTable, SequenceFileInputFormat.class, UTF8.class, DoubleWritable.class)
>     addInputPath(classificationTable, TextInputFormat.class, UTF8.class, UTF8.class)
>     addInputPath(metaDataTable, SequenceFileInputFormat.class, UTF8.class, MyRecord.class)
> If an input directory is added through the current API, it will have the same meaning as it is now. 
> Thus this extension will not affect any applications that do not need this new feature.
> It is relatively easy for the M/R framework to create an appropriate record reader for a map task based on the above information.
> And that is the only change needed for supporting this extension.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.