Posted to user@hbase.apache.org by 娄帅 <lo...@gmail.com> on 2015/05/12 11:59:11 UTC

HBase Block locality always 0

Hi, all,

I am maintaining an HBase 0.96.0 cluster, but on the web UI of the
regionservers I see a block locality of 0 for every regionserver.

Datanodes are on l-hbase[26-31].data.cn8 and regionservers on
l-hbase[25-31].data.cn8.

Any idea?

Re: HBase Block locality always 0

Posted by Louis Hust <lo...@gmail.com>.
Hi, Alex,

Today the same problem occurred: one region server, rs1, saw heavy inbound
network traffic. It seems the regionserver was reading data from other
datanodes instead of the local datanode, and eventually it could no longer
serve requests, so we had to stop it. But after stopping rs1, another
region server, rs2, ran into the same problem.


2015-05-19 10:41 GMT+08:00 Louis Hust <lo...@gmail.com>:


Re: HBase Block locality always 0

Posted by Louis Hust <lo...@gmail.com>.
Hi, Alex,

Maybe the block locality display is wrong? Because I checked some region
files and found some replicas on the same machine!

2015-05-19 7:18 GMT+08:00 Alex Baranau <al...@gmail.com>:


Re: HBase Block locality always 0

Posted by Alex Baranau <al...@gmail.com>.
Sorry if I'm asking a silly question... Are you sure your RSs and Datanodes
are all up and running? Are you sure they are collocated?

> Datanode on l-hbase[26-31].data.cn8 and regionserver on
> l-hbase[25-31].data.cn8,

It could be that your only live RS is on l-hbase25.data.cn8, which would
cause that behavior... By the way, why isn't the 25th collocated with a
datanode?
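
The hostname ranges quoted above can be checked mechanically. A standalone
sketch (the `expand` helper is illustrative, not an HBase or HDFS tool):

```python
# Expand a hostname pattern like "l-hbase[26-31].data.cn8" into concrete
# hosts and report regionservers that have no collocated datanode.

import re

def expand(pattern):
    m = re.match(r"(.*)\[(\d+)-(\d+)\](.*)", pattern)
    prefix, lo, hi, suffix = m.group(1), int(m.group(2)), int(m.group(3)), m.group(4)
    return [f"{prefix}{i}{suffix}" for i in range(lo, hi + 1)]

datanodes = set(expand("l-hbase[26-31].data.cn8"))
regionservers = set(expand("l-hbase[25-31].data.cn8"))

# A regionserver without a collocated datanode can never serve local reads.
print(sorted(regionservers - datanodes))  # -> ['l-hbase25.data.cn8']
```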

Alex Baranau
--
http://cdap.io - open source framework to build and run data applications
on Hadoop & HBase

On Fri, May 15, 2015 at 8:12 PM, Louis Hust <lo...@gmail.com> wrote:


Re: HBase Block locality always 0

Posted by Louis Hust <lo...@gmail.com>.
Hi, Esteban,

Hadoop version 2.2.0, r1537062.
So I do not know why it always writes to other datanodes instead of the
local datanode. Is there some log for the HDFS write policy? Right now the
cluster is unhealthy, with heavy network traffic.
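
For background on the write policy: with default HDFS block placement, the
first replica of each block goes to the writer's own datanode when one runs
on that host, which is what normally keeps an RS's locality high. A
much-simplified sketch of that idea (ignoring racks and node load; not the
real BlockPlacementPolicyDefault):

```python
# Simplified default HDFS placement: first replica on the writer's host if
# a datanode runs there, remaining replicas on other nodes.

import random

def place_replicas(writer_host, datanodes, replication=3):
    targets = []
    if writer_host in datanodes:
        targets.append(writer_host)  # local first replica -> local reads later
    remaining = [d for d in datanodes if d not in targets]
    random.shuffle(remaining)
    targets.extend(remaining[: replication - len(targets)])
    return targets

datanodes = [f"l-hbase{i}.data.cn8" for i in range(26, 32)]
# A regionserver collocated with a DN gets a local first replica:
print(place_replicas("l-hbase26.data.cn8", datanodes)[0])  # -> l-hbase26.data.cn8
# l-hbase25 has no local DN, so none of its replicas can be local:
print("l-hbase25.data.cn8" in place_replicas("l-hbase25.data.cn8", datanodes))  # -> False
```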

2015-05-15 1:28 GMT+08:00 Esteban Gutierrez <es...@cloudera.com>:


Re: HBase Block locality always 0

Posted by Esteban Gutierrez <es...@cloudera.com>.
Hi Louis,

Locality 0 is not right for a cluster of that size with 3 replicas per
block, unless no RS can connect to its local DN and the DN local to the RS
is somehow always excluded from the write pipeline. In Hadoop 2.0-alpha
there was a bug (HDFS-3224) that caused the NN to report a DN as both live
and dead if the storage ID changed in a single volume (e.g. after replacing
one drive), and that caused fs.getFileBlockLocations() to report fewer
blocks when calculating the HDFS locality index. Unless your cluster is
using Hadoop 2.0-alpha, I wouldn't worry too much about that.

Regarding the logs, it's odd that the JN is taking about 1.5 seconds just
to send fewer than 200 bytes. Perhaps some I/O contention issue is going on
in your cluster?
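
For context, the locality index an RS reports is roughly the fraction of
its store-file blocks that have at least one replica on the RS's own host,
derived from the block locations HDFS returns. A self-contained sketch of
that calculation with made-up block data (not HBase's actual code):

```python
# Locality index: fraction of a regionserver's store-file blocks that have
# at least one replica on the regionserver's own host.

def locality_index(rs_host, block_locations):
    """block_locations: one list of replica hosts per HDFS block."""
    if not block_locations:
        return 0.0
    local = sum(1 for hosts in block_locations if rs_host in hosts)
    return local / len(block_locations)

# Made-up example: 4 blocks, 3 replicas each; two blocks have a local replica.
blocks = [
    ["l-hbase26.data.cn8", "l-hbase27.data.cn8", "l-hbase28.data.cn8"],
    ["l-hbase29.data.cn8", "l-hbase30.data.cn8", "l-hbase31.data.cn8"],
    ["l-hbase27.data.cn8", "l-hbase29.data.cn8", "l-hbase31.data.cn8"],
    ["l-hbase28.data.cn8", "l-hbase30.data.cn8", "l-hbase26.data.cn8"],
]
print(locality_index("l-hbase26.data.cn8", blocks))  # -> 0.5
```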

thanks,
esteban.

--
Cloudera, Inc.


On Thu, May 14, 2015 at 5:48 AM, Louis Hust <lo...@gmail.com> wrote:


Re: HBase Block locality always 0

Posted by Louis Hust <lo...@gmail.com>.
Hi, Esteban

Each region server has about 122 regions, and the data is large. HDFS
replication is left at the default of 3, and the namenode has some WARN
messages like the one below.

{log}
2015-05-14 20:45:37,463 WARN
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1503ms to
send a batch of 3 edits (179 bytes) to remote journal 192.168.44.29:8485
{/log}

Regionserver's log seems normal:

{log}
2015-05-14 20:46:59,890 INFO  [Thread-15] regionserver.HRegion: Finished
memstore flush of ~44.4 M/46586984, currentsize=0/0 for region
qmq_backup,0066485937885860620cb396a3e65c6c9de92cae9aa29,1412429632233.65684ef65f58cb3e27986ca38d397bee.
in 3141ms, sequenceid=7493455453, compaction requested=true
2015-05-14 20:46:59,890 INFO
 [regionserver60020-smallCompactions-1431462564717] regionserver.HRegion:
Starting compaction on m in region
qmq_backup,0066485937885860620cb396a3e65c6c9de92cae9aa29,1412429632233.65684ef65f58cb3e27986ca38d397bee.
{/log}

Any idea?



2015-05-13 1:26 GMT+08:00 Esteban Gutierrez <es...@cloudera.com>:


Re: HBase Block locality always 0

Posted by Esteban Gutierrez <es...@cloudera.com>.
Hi,

How many regions do you have per RS? One possibility is that you have very
little data in your cluster, regions have moved around, and there are no
blocks on the DN local to the RS. Another possibility is that you have one
replica configured and regions moved too, which makes it even harder to
have local blocks on the DN local to the RS. Lastly, it could be some other
problem where the HDFS pipeline has excluded the local DN. Have you seen
any exceptions in the RSs or the NameNode that might be interesting?

thanks,
esteban.



--
Cloudera, Inc.


On Tue, May 12, 2015 at 2:59 AM, 娄帅 <lo...@gmail.com> wrote:


Re: HBase Block locality always 0

Posted by Dima Spivak <ds...@cloudera.com>.
Have you seen Esteban's suggestion? Another possibility: a number of old
JIRAs covered regions being assigned in a silly way when a table was
disabled and then re-enabled. Could this be the case for you?

-Dima

On Wed, May 13, 2015 at 8:36 PM, 娄帅 <lo...@gmail.com> wrote:


Re: HBase Block locality always 0

Posted by 娄帅 <lo...@gmail.com>.
Any idea?

2015-05-12 17:59 GMT+08:00 娄帅 <lo...@gmail.com>:
