Posted to mapreduce-issues@hadoop.apache.org by "michele (JIRA)" <ji...@apache.org> on 2014/05/21 21:32:38 UTC
[jira] [Created] (MAPREDUCE-5901) Hadoop 2.4 Java execution issue: remote job submission fails
michele created MAPREDUCE-5901:
----------------------------------
Summary: Hadoop 2.4 Java execution issue: remote job submission fails
Key: MAPREDUCE-5901
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5901
Project: Hadoop Map/Reduce
Issue Type: New Feature
Environment: java, hadoop v. 2.4
Reporter: michele
I have installed Hadoop 2.4 on a remote machine in single-node mode. From another machine (the client) I run a Java application that submits a job to the remote Hadoop machine (the cluster), using the attached code. The problem is that the map process actually executes on my local machine (the client), not on the cluster machine.
import java.net.Inet4Address;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

JobConf job = new JobConf(SOF.class);
job.setJobName("SIM-" + sim_id);
System.setProperty("HADOOP_USER_NAME", "hadoop");
FileInputFormat.addInputPath(job, new Path("hdfs://cluster_ip:port" + USERS_HOME + user + "/SIM-" + sim_id + "/" + INPUT_FOLDER_HOME + "/input.tmp"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://cluster_ip:port" + USERS_HOME + user + "/SIM-" + sim_id + "/" + OUTPUT_FOLDER_HOME));
job.set("jar.work.directory", "hdfs://cluster_ip:port" + SOF.USERS_HOME + user + "/SIM-" + sim_id + "/flockers.jar");
job.setMapperClass(Mapper.class);
job.setReducerClass(Reducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.set("mapred.job.tracker", "cluster_ip:port");
job.set("fs.default.name", "hdfs://cluster_ip:port");
job.set("hadoop.job.ugi", "hadoop,hadoop");
job.set("user", "hadoop");
try {
    JobClient jobc = new JobClient(job);
    System.out.println(jobc + " " + job);
    RunningJob runjob = jobc.submitJob(job);
    System.out.println(runjob);
    System.out.println("VM " + Inet4Address.getLocalHost());
    // Block until the job finishes; the original busy-wait compared the
    // JobStatus object against the int constant JobStatus.SUCCEEDED, which
    // can never match.
    runjob.waitForCompletion();
} catch (Exception e) {
    e.printStackTrace();
}
}
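A likely explanation for the local execution: in Hadoop 2.x the client chooses its job runner from the client-side value of mapreduce.framework.name, and mapred.job.tracker is ignored once YARN is in play; if the key is absent on the client, the default is "local" and the job runs inside the client JVM. The following stdlib-only sketch is an assumption-laden simplification of that selection logic (it is not Hadoop's actual ClientProtocolProvider code, and the class and method names are hypothetical):

```java
import java.util.Properties;

// Simplified illustration of how the Hadoop 2.x client decides where a job
// runs. Real Hadoop resolves this via ServiceLoader and ClientProtocolProvider;
// this class only mirrors the observable behavior.
public class RunnerChoice {
    static String chooseRunner(Properties clientConf) {
        // "local" is the client-side default when the key is not set.
        String framework = clientConf.getProperty("mapreduce.framework.name", "local");
        return framework.equals("yarn") ? "YARNRunner (remote cluster)"
                                        : "LocalJobRunner (client JVM)";
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Key missing on the client: job executes in the client JVM.
        System.out.println(chooseRunner(conf));
        // Key set to yarn on the client: job is submitted to the cluster.
        conf.setProperty("mapreduce.framework.name", "yarn");
        System.out.println(chooseRunner(conf));
    }
}
```

If this is what is happening here, setting mapreduce.framework.name=yarn in the client's configuration (not only in the cluster's mapred-site.xml) should move execution to the cluster.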
I have tried to configure Hadoop correctly using the following mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>cluster_ip:port</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
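Note that these properties only take effect if the configuration the client loads contains them too. A hedged sketch of a client-side configuration (assumptions: 8032 is the stock yarn.resourcemanager.address port and fs.defaultFS normally lives in core-site.xml; substitute the cluster's real host and ports):

```xml
<!-- Client-side settings; host/ports below are placeholders, not values from this report. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>cluster_ip:8032</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster_ip:port</value>
  </property>
</configuration>
```

With mapreduce.framework.name set to yarn, the deprecated mapred.job.tracker key is ignored, so the ResourceManager address is what actually directs the submission.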
--
This message was sent by Atlassian JIRA
(v6.2#6252)