Posted to user@hbase.apache.org by Jone Zhang <jo...@gmail.com> on 2016/05/03 03:41:57 UTC

Re: How can I get the memory used by an HBase table? Why does the HDFS size of an HBase table double when I use bulkload?

For #1
My workload is read heavy.
I use bulkload to write data once a day.
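
The daily flow is roughly like the following sketch; the input path,
output path, and column qualifier are illustrative rather than the exact
job (only the family 'f' and table qimei_info come from this thread):

  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,f:info \
    -Dimporttsv.bulk.output=/tmp/qimei_info_hfiles \
    qimei_info /user/hbase/input/qimei_info.tsv
  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
    /tmp/qimei_info_hfiles qimei_info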

Thanks.

2016-04-30 1:13 GMT+08:00 Ted Yu <yu...@gmail.com>:

> For #1, can you clarify whether your workload is read heavy, write heavy,
> or a mixed load of reads and writes?
>
> For #2, have you run a major compaction after the second bulk load?
>
> On Thu, Apr 28, 2016 at 9:16 PM, Jone Zhang <jo...@gmail.com>
> wrote:
>
> > *1. How can I get the memory used by an HBase table?*
> > *2. Why does the HDFS size of an HBase table double when I use bulkload?*
> >
> > bulkload the file to qimei_info
> >
> > 101.7 G  /user/hbase/data/default/qimei_info
> >
> > bulkload the same file to qimei_info again
> >
> > 203.3 G  /user/hbase/data/default/qimei_info
> >
> > hbase(main):001:0> describe 'qimei_info'
> > DESCRIPTION
> >  'qimei_info', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE',
> >  BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1',
> >  COMPRESSION => 'LZO', MIN_VERSIONS => '0', TTL => '2147483647',
> >  KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536',
> >  IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> > 1 row(s) in 1.4170 seconds
> >
> >
> > *Best wishes.*
> > *Thanks.*
> >
>
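
On #2: a bulk load adds new HFiles alongside the existing ones instead of
replacing them, so loading the same file twice leaves two full copies of
the data on HDFS until a major compaction rewrites each region and, with
VERSIONS => '1', drops the duplicate cells. A minimal hbase shell sketch,
using the table name from the thread:

  major_compact 'qimei_info'

Re-running hdfs dfs -du -h /user/hbase/data/default/qimei_info after the
compaction finishes should show the size dropping back toward the
single-copy footprint.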

Re: How can I get the memory used by an HBase table? Why does the HDFS size of an HBase table double when I use bulkload?

Posted by Ted Yu <yu...@gmail.com>.
For #1, consider increasing hfile.block.cache.size (assuming the majority
of your reads are not point gets).
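
A minimal hbase-site.xml sketch of that change; the 0.4 value is
illustrative (the setting is the fraction of RegionServer heap reserved
for the block cache):

  <property>
    <name>hfile.block.cache.size</name>
    <value>0.4</value>
  </property>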

FYI

On Mon, May 2, 2016 at 6:41 PM, Jone Zhang <jo...@gmail.com> wrote:

> For #1
> My workload is read heavy.
> I use bulkload to write data once a day.
>
> Thanks.