Posted to user@hbase.apache.org by Christian Schäfer <sy...@yahoo.de> on 2012/08/01 10:18:50 UTC

Re: How to query by rowKey-infix

Thanks Matt & Jerry for your replies.

The data for each row is small (some hundred Bytes).

So, I will try the parallel table scan first, as you suggested...
Before organizing that myself, wouldn't it be better to create a MapReduce job for that?
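Roughly, I imagine a map-only job over the table. A sketch of what I have in mind (table name, column family, key layout and the date pattern are only placeholders, written against the 0.94-era client API):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class InfixScanJob {

  /** Map-only pass: collect the row keys whose date infix matched the scan filter. */
  static class KeyMapper extends TableMapper<Text, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
        throws IOException, InterruptedException {
      // The server-side filter already restricted the rows; just count/emit the keys here.
      context.getCounter("infix-scan", "matched-rows").increment(1);
      context.write(new Text(Bytes.toString(rowKey.get())), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "rowkey-infix-scan");
    job.setJarByClass(InfixScanJob.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // bigger scanner caching for a long sequential scan
    scan.setCacheBlocks(false);  // don't evict hot data from the block cache
    // Placeholder pattern for keys shaped like <userId>-<yyyyMMdd>-<sessionId>
    // (assumes the userId itself contains no '-'), here matching July 2012:
    scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
        new RegexStringComparator("^[^-]+-201207\\d{2}-.*")));

    TableMapReduceUtil.initTableMapperJob("sessions", scan, KeyMapper.class,
        Text.class, NullWritable.class, job);
    job.setNumReduceTasks(0);                          // map-only
    job.setOutputFormatClass(NullOutputFormat.class);  // keys only counted/logged in this sketch
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

That should at least parallelize the scan across regions (one map task per region) without me having to manage the split points myself.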

I'm not so keen on implementing secondary indices, especially due to the mentioned consistency concerns.
Unfortunately, projects like ithbase and ihbase no longer support current HBase releases, and secondary indexes via coprocessors don't seem to be there yet.
If I'm wrong feel free to correct me :)

regards,
Chris



----- Original Message -----
From: Matt Corgan <mc...@hotpads.com>
To: user@hbase.apache.org
CC: Christian Schäfer <sy...@yahoo.de>
Sent: Tuesday, July 31, 2012, 19:41
Subject: Re: How to query by rowKey-infix

When deciding between a table scan vs secondary index, you should try to
estimate what percent of the underlying data blocks will be used in the
query.  By default, each block is 64KB.

If each user's data is small and you are fitting multiple users per block,
then you're going to need all the blocks, so a table scan is better because
it's simpler.  If each user has 1MB+ of data, then you will want to pick out
the individual blocks relevant to each date.  The secondary index will help
you go directly to those sparse blocks, but with a cost in complexity,
consistency, and extra denormalized data that knocks primary data out of
your block cache.
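
As a point of reference, that block size is a per-column-family setting. A minimal sketch of pinning it at table-creation time (table and family names are only placeholders) could look like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateSessionsTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("sessions");  // placeholder table name
    HColumnDescriptor family = new HColumnDescriptor("d");     // placeholder family name
    family.setBlocksize(64 * 1024);  // 64KB is the default HFile block size anyway
    desc.addFamily(family);

    admin.createTable(desc);
  }
}

Smaller blocks make point lookups cheaper at the price of a larger block index; for mostly-sequential scans the default is usually fine.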

If latency is not a concern, I would start with the table scan.  If that's
too slow, you add the secondary index, and if you still need it faster, you
do the primary key lookups in parallel as Jerry mentions.
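
For example, a straight scan over the whole table with a server-side row filter on the date infix could look roughly like this (0.94-era client API; the table name and key pattern are only placeholders for your schema):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class DateInfixScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "sessions");  // placeholder table name
    try {
      Scan scan = new Scan();
      scan.setCaching(500);  // fetch more rows per RPC during a long sequential scan
      // Keep only keys whose date infix falls in the wanted range, assuming keys
      // shaped like <userId>-<yyyyMMdd>-<sessionId> with no '-' inside the userId:
      scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
          new RegexStringComparator("^[^-]+-2012073[01]-.*")));

      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));  // matching row keys
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}

The filter is evaluated on the region servers, so only matching rows travel back to the client, but every block of the table still has to be read, which is why the block estimate above matters.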

Matt

On Tue, Jul 31, 2012 at 10:10 AM, Jerry Lam <ch...@gmail.com> wrote:

> Hi Chris:
>
> I'm thinking about building a secondary index for primary key lookups, then
> querying using the primary keys in parallel.
>
> I'm interested to see if there are other options too.
>
> Best Regards,
>
> Jerry
>
> On Tue, Jul 31, 2012 at 11:27 AM, Christian Schäfer <syrious3000@yahoo.de> wrote:
>
> > Hello there,
> >
> > I designed a row key for queries that need best performance (~100 ms)
> > which looks like this:
> >
> > userId-date-sessionId
> >
> > These queries (scans) are always based on a userId and sometimes
> > on a date, too.
> > That's no problem with the key above.
> >
> > However, another kind of query shall be based on a given time range,
> > where the leftmost part of the key (the userId) is not given or known.
> > In this case I need to get all rows whose date lies within the given
> > time range, in order to create a daily report.
> >
> > As I can't use wildcards at the beginning of a left-prefixed index (the row
> > key) for the scan, I only see the possibility of scanning the whole table to
> > collect the rowKeys that lie inside the time range I'm interested in.
> >
> > Is there a more elegant way to collect rows within time range X?
> > (Unfortunately, the date attribute is not equal to the timestamp that is
> > stored by hbase automatically.)
> >
> > Could/should one maybe leverage some kind of row key caching to accelerate
> > the collection process?
> > Is that covered by the block cache?
> >
> > Thanks in advance for any advice.
> >
> > regards
> > Chris
> >
>