Posted to user@hbase.apache.org by Li Li <fa...@gmail.com> on 2014/05/08 07:50:32 UTC

MapReduce becomes much slower when upgrading from 0.94.11 to 0.96.2-hadoop1

Today I upgraded HBase from 0.94.11 to 0.96.2-hadoop1. I have not changed
any client code except replacing the 0.94.11 client jar with the 0.96.2 one.
With the old version, the MapReduce job ran at about 10,000 requests per
second; with the new version it is only about 300. What is going wrong?
Plain HBase puts and gets are still fast, at more than 5,000 requests per
second.

My code:
List<Scan> scans = new ArrayList<Scan>();

Scan urldbScan = new Scan();
urldbScan.setCaching(5000);
urldbScan.setCacheBlocks(false);
urldbScan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, HbaseTools.TB_URL_DB_BT);
urldbScan.addFamily(HbaseTools.CF_BT);
scans.add(urldbScan);

Scan outLinkScan = new Scan();
outLinkScan.setCaching(5000);
outLinkScan.setCacheBlocks(false);
outLinkScan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, HbaseTools.TB_OUT_LINK_BT);
outLinkScan.addFamily(HbaseTools.CF_BT);
scans.add(outLinkScan);

TableMapReduceUtil.initTableMapperJob(scans, Step1Mapper.class,
        BytesWritable.class, ScheduleData.class, job);

Re: MapReduce becomes much slower when upgrading from 0.94.11 to 0.96.2-hadoop1

Posted by Ishan Chhabra <ic...@rocketfuel.com>.
Adding back user@hbase.


On Mon, Jul 21, 2014 at 7:11 PM, Ishan Chhabra <ic...@rocketfuel.com>
wrote:

> Remove the line:
>
> urldbScan.setCaching(5000);
>
> and add:
>
> TableMapReduceUtil.setScannerCaching(job, 5000);
>
>
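Applied to the code from the original post, that change would look roughly like
the sketch below. It is only a sketch, not a tested patch: HbaseTools,
Step1Mapper and ScheduleData are the poster's own classes, the enclosing class
and method here are made up for readability, and the Job is assumed to be
created and submitted elsewhere.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.mapreduce.Job;

public class Step1JobSetup {

    // Same two scans as before, but without per-Scan setCaching(5000).
    public static void configure(Job job) throws IOException {
        List<Scan> scans = new ArrayList<Scan>();

        Scan urldbScan = new Scan();
        urldbScan.setCacheBlocks(false);   // still skip the block cache for a full scan
        urldbScan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, HbaseTools.TB_URL_DB_BT);
        urldbScan.addFamily(HbaseTools.CF_BT);
        scans.add(urldbScan);

        Scan outLinkScan = new Scan();
        outLinkScan.setCacheBlocks(false);
        outLinkScan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, HbaseTools.TB_OUT_LINK_BT);
        outLinkScan.addFamily(HbaseTools.CF_BT);
        scans.add(outLinkScan);

        TableMapReduceUtil.initTableMapperJob(scans, Step1Mapper.class,
                BytesWritable.class, ScheduleData.class, job);

        // Replaces the old urldbScan.setCaching(5000) / outLinkScan.setCaching(5000):
        // the caching hint goes into the job configuration instead of the Scan objects.
        TableMapReduceUtil.setScannerCaching(job, 5000);
    }
}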
> On Mon, Jul 21, 2014 at 6:58 PM, Li Li <fa...@gmail.com> wrote:
>
>> It seems we are hitting this problem, but after reading the issue I still
>> don't know how to solve it. Could you please give me some sample code?
>> My code is the same as in my first mail above; what should I do in HBase 0.96?



-- 
Ishan Chhabra | Rocket Scientist | RocketFuel Inc.

Re: MapReduce becomes much slower when upgrading from 0.94.11 to 0.96.2-hadoop1

Posted by Ishan Chhabra <ic...@rocketfuel.com>.
You might be affected by this:
https://issues.apache.org/jira/browse/HBASE-11558
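As far as I know, TableMapReduceUtil.setScannerCaching essentially just writes
hbase.client.scanner.caching into the job configuration, so if that issue is
indeed the cause, setting the property directly before submitting the job
should be an equivalent workaround:

// Hypothetical one-liner; "job" is the same Job passed to initTableMapperJob,
// and 5000 matches the caching value used in the original code.
job.getConfiguration().setInt("hbase.client.scanner.caching", 5000);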





-- 
Ishan Chhabra | Rocket Scientist | RocketFuel Inc.

Re: MapReduce becomes much slower when upgrading from 0.94.11 to 0.96.2-hadoop1

Posted by Ishan Chhabra <ic...@rocketfuel.com>.
Li Li,
Were you able to figure out the cause of this? I am seeing something
similar.





-- 
Ishan Chhabra | Rocket Scientist | RocketFuel Inc.