Posted to user@hbase.apache.org by yonghu <yo...@gmail.com> on 2012/07/12 13:15:45 UTC

Why Hadoop can't find Reducer when Mapper reads data from HBase?

Hello,

I tried the program as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MRTableAccess {

	static class MRTableMapper extends TableMapper<Text, Text> {
		private Text rowInfor = new Text();
		private Text column = new Text();

		@Override
		public void map(ImmutableBytesWritable row, Result values, Context context)
				throws IOException, InterruptedException {
			for (KeyValue kv : values.raw()) {
				// Output key: row/family/qualifier
				String rowKey = Bytes.toString(kv.getRow()) + "/"
						+ Bytes.toString(kv.getFamily()) + "/"
						+ Bytes.toString(kv.getQualifier());
				// Output value: operation type/timestamp/cell value
				String opType = KeyValue.Type.codeToType(kv.getType()).toString();
				String columnValue = opType + "/" + kv.getTimestamp() + "/"
						+ Bytes.toString(kv.getValue());
				rowInfor.set(rowKey);
				column.set(columnValue);
				context.write(rowInfor, column);
			}
		}
	}

	static class MTableReducer extends Reducer<Text, Text, Text, Text> {

		@Override
		public void reduce(Text key, Iterable<Text> values, Context context)
				throws IOException, InterruptedException {
			for (Text value : values) {
				context.write(key, value);
			}
		}
	}

	// Only extracts the latest data version for each row
	// (a Scan returns one version per cell by default).
	public static void main(String[] args) throws Exception {
		long startTime = System.currentTimeMillis();
		//System.out.println("start_time is " + startTime);
		Configuration conf = new Configuration();
		Configuration hconf = HBaseConfiguration.create(conf);
		hconf.set("hbase.zookeeper.quorum", "localhost");
		hconf.set("hbase.zookeeper.property.clientPort", "2181");
		hconf.set("fs.default.name", "hdfs://localhost:8020");
		Job job = new Job(hconf, "MRTableScan");
		job.setJarByClass(MRTableAccess.class);
		Scan scan = new Scan();
		TableMapReduceUtil.initTableMapperJob("Baseball", scan,
				MRTableMapper.class, Text.class, Text.class, job);
		job.setReducerClass(MTableReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);
		FileOutputFormat.setOutputPath(job,
				new Path("hdfs://localhost/MRExtraction"));
		boolean success = job.waitForCompletion(true);
		System.exit(success ? 0 : 1); // exit code 0 signals success
	}
}
I ran this program on my laptop. It works fine when only the map tasks
run, but when I add the reduce task, an error occurs:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mapreducetablescan.MRTableAccess$MTableReducer;

Does anybody know why?

regards!

Yong
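
A ClassNotFoundException for an inner mapper or reducer class almost
always means the class never made it into the job jar that Hadoop ships
to the task JVMs. One quick diagnostic, sketched below against the same
Job API used in the driver above (illustrative only, not part of the
original post): Job.getJar() returns the jar that setJarByClass()
resolved, and returns null when the class was loaded from a plain
directory such as an IDE's build output, in which case no user classes
are shipped at all.

		// Illustrative check; place after job.setJarByClass(...) above.
		String jarPath = job.getJar();
		if (jarPath == null) {
			// The driver class was not loaded from a jar, so the task
			// JVMs will never see MRTableMapper or MTableReducer.
			System.err.println("No job jar found; package the classes into"
					+ " a jar and launch with 'hadoop jar'.");
		} else {
			System.err.println("Shipping job jar: " + jarPath);
		}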

Re: Why Hadoop can't find Reducer when Mapper reads data from HBase?

Posted by yonghu <yo...@gmail.com>.
The strange thing is that the same program works fine on the cluster.
By the way, the same error also occurred in pseudo-distributed mode
when MapReduce read data from Cassandra in the map phase and
transferred it to the reduce phase.

regards!

Yong

On Thu, Jul 12, 2012 at 2:01 PM, Stack <st...@duboce.net> wrote:
> On Thu, Jul 12, 2012 at 1:15 PM, yonghu <yo...@gmail.com> wrote:
>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>> com.mapreducetablescan.MRTableAccess$MTableReducer;
>>
>> Does anybody know why?
>>
>
> It's not in your job jar?  Check the job jar (jar -tf JAR_FILE).
>
> St.Ack
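
The cluster-versus-laptop difference fits the missing-jar theory: on
the cluster the job is presumably launched with 'hadoop jar', which
registers the jar with the job, while a driver started straight from an
IDE against a pseudo-distributed cluster has no jar to ship (the
Cassandra case would fail the same way for the same reason). One
possible workaround, assuming the jar has already been built, is to
point the job at it explicitly; the path below is made up for
illustration:

		// Hypothetical: pin the job jar when launching from an IDE.
		// "mapred.jar" is the property JobConf.setJar() writes to;
		// the jar path is an assumption, not from the original thread.
		job.getConfiguration().set("mapred.jar",
				"/home/yong/mrtableaccess.jar");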

Re: Why Hadoop can't find Reducer when Mapper reads data from HBase?

Posted by Stack <st...@duboce.net>.
On Thu, Jul 12, 2012 at 1:15 PM, yonghu <yo...@gmail.com> wrote:
> java.lang.RuntimeException: java.lang.ClassNotFoundException:
> com.mapreducetablescan.MRTableAccess$MTableReducer;
>
> Does anybody know why?
>

It's not in your job jar?  Check the job jar (jar -tf JAR_FILE).

St.Ack
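
For reference, the same check as 'jar -tf' can be done from Java (a
minimal, self-contained sketch; the jar path is passed as an argument).
Note that inner classes compile to separate class files, so the listing
should contain com/mapreducetablescan/MRTableAccess$MTableReducer.class:

import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarCheck {
	public static void main(String[] args) throws Exception {
		// Equivalent of 'jar -tf <jar>', filtered to the job classes.
		JarFile jar = new JarFile(args[0]);
		for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
			String name = e.nextElement().getName();
			if (name.contains("MRTableAccess")) {
				System.out.println(name);
			}
		}
		jar.close();
	}
}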