Posted to common-user@hadoop.apache.org by Subroto <ss...@datameer.com> on 2013/05/28 11:16:14 UTC
LocalJobRunner is not using the correct JobConf to setup the OutputCommitter
Hi,
I am reusing JobClient object which internally holds a LocalJobRunner instance.
When I submit a Job via the JobClient, LocalJobRunner does not use the correct JobConf when calling OutputCommitter.setupJob().
Following is the code snippet from LocalJobRunner#org.apache.hadoop.mapred.LocalJobRunner.Job.run():
public void run() {
  JobID jobId = profile.getJobID();
  JobContext jContext = new JobContext(conf, jobId);
  OutputCommitter outputCommitter = job.getOutputCommitter();
  try {
    TaskSplitMetaInfo[] taskSplitMetaInfos =
        SplitMetaInfoReader.readSplitMetaInfo(jobId, localFs, conf, systemJobDir);
    int numReduceTasks = job.getNumReduceTasks();
    if (numReduceTasks > 1 || numReduceTasks < 0) {
      // we only allow 0 or 1 reducer in local mode
      numReduceTasks = 1;
      job.setNumReduceTasks(1);
    }
    outputCommitter.setupJob(jContext);
    status.setSetupProgress(1.0f);
    // Some more code to start map and reduce
}
The JobContext created in the second line of the snippet is built with the JobConf with which the LocalJobRunner was instantiated; instead, it should be created with the JobConf with which the Job itself was instantiated. This same context is then passed to outputCommitter.setupJob(), so any per-job output configuration is ignored.
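The underlying pitfall can be shown with a small self-contained sketch (the class and property names below are hypothetical illustrations, not Hadoop classes): a runner that captures a configuration once at construction time and reuses it for every submitted job will silently ignore per-job settings.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for JobConf: a simple key/value configuration.
class Conf {
    private final Map<String, String> props = new HashMap<>();
    void set(String key, String value) { props.put(key, value); }
    String get(String key) { return props.get(key); }
}

// Hypothetical stand-in for LocalJobRunner.
class LocalRunner {
    private final Conf runnerConf; // captured once, at construction time

    LocalRunner(Conf conf) { this.runnerConf = conf; }

    // Mirrors the reported behavior: setup reads the runner's conf,
    // so whatever the submitted job configured is never seen.
    String setupJobBuggy(Conf jobConf) {
        return runnerConf.get("output.dir");
    }

    // Expected behavior: setup reads the conf of the submitted job.
    String setupJobFixed(Conf jobConf) {
        return jobConf.get("output.dir");
    }
}

public class Demo {
    public static void main(String[] args) {
        Conf runnerConf = new Conf();
        runnerConf.set("output.dir", "/tmp/default");

        Conf jobConf = new Conf();
        jobConf.set("output.dir", "/tmp/job-42");

        LocalRunner runner = new LocalRunner(runnerConf);
        System.out.println(runner.setupJobBuggy(jobConf)); // prints /tmp/default
        System.out.println(runner.setupJobFixed(jobConf)); // prints /tmp/job-42
    }
}
```

In LocalJobRunner terms, the buggy path corresponds to building the JobContext from the runner's conf, and the fixed path to building it from the Job's own JobConf.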
Please let me know whether this is a bug or whether there is a specific intention behind this behavior.
Cheers,
Subroto Sanyal