Posted to dev@hbase.apache.org by Jean-Marc Spaggiari <je...@spaggiari.org> on 2013/09/02 21:04:27 UTC

0.96.0 keep logging "Memstore is above high water mark"

While running PE with 10 clients, the server keeps logging:
2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
regionserver.MemStoreFlusher: Memstore is above high water mark and
block 2362ms
2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
regionserver.MemStoreFlusher: Memstore is above high water mark and
block 2363ms

Then when the test is over:
2013-09-02 14:57:02,280 WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
caught end of stream exception
EndOfStreamException: Unable to read additional data from client
sessionid 0x140dfdfc6270044, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:662)

/hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
"Memstore is above high water mark and block" | wc
   3555   49770  514558


/hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
"Unable to read additional data from client sessionid" | wc
    102    1530   12852

I guess it's only because of PE, but is this something which needs to
be looked at?

JM
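
For context, the "high water mark" in that warning is the global memstore limit
on the region server: once the combined memstores cross the upper limit, handlers
are blocked until flushing brings usage back under the lower limit. A minimal
hbase-site.xml sketch of the two settings involved (property names as in the 0.9x
line; the values shown are only illustrative):

<configuration>
  <!-- Writes are blocked while total memstore usage is above this fraction
       of the region server heap (the "high water mark" in the warning). -->
  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.4</value>
  </property>
  <!-- Flushing continues until usage drops back below this fraction. -->
  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.38</value>
  </property>
</configuration>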

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Frank Chow <zh...@gmail.com>.
If there is more memory available, enlarge the parameter hbase.hregion.memstore.block.multiplier.
It will also help to enlarge the parameter hbase.hstore.blockingStoreFiles.
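
For reference, a minimal hbase-site.xml sketch showing where those two parameters
go (the values below are examples to show the syntax, not tuned recommendations
for this workload):

<configuration>
  <!-- A region blocks updates once its memstore reaches
       hbase.hregion.memstore.flush.size multiplied by this factor. -->
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
  </property>
  <!-- A store with more than this many store files delays flushes,
       which in turn lets memstores grow and eventually block writes. -->
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>10</value>
  </property>
</configuration>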




Frank Chow

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Kevin O'dell <ke...@cloudera.com>.
JD,

  I see what you are saying, I got this mixed up with a different message.
 Sorry about that.


On Tue, Sep 3, 2013 at 1:47 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:

> On Tue, Sep 3, 2013 at 10:35 AM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
>
> > > Your 10 clients are disconnecting from ZK, you're letting HBase manage it?
> > Yep, I start PE from the command line, I don't expect to have to do
> > anything after that. So issue is on PE side?
>
> Not really what I asked, I wanted to know if you set HBASE_MANAGES_ZK at
> all in hbase-env.
>
> > > Please don't remove the warn. It is important for troubleshooting and
> > > sizing.
> > Can you please tell more on how it helps to do sizing? Interested.
> >
> > Thanks,
> >
> > JM
> >
> > 2013/9/3 Kevin O'dell <ke...@cloudera.com>
> >
> > > Please don't remove the warn. It is important for troubleshooting and
> > > sizing.
> > > On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org> wrote:
> > >
> > > > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
> > > >
> > > > > While running PE with 10 clients, server keep logging:
> > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > > block 2362ms
> > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > > block 2363ms
> > > >
> > > > Yeah that was added in HBASE-6466, it helps tracing when the handlers
> > > > are blocked on the memstores, else you have to match the "Blocking updates"
> > > > with the "Unblocking updates" lines. I'd just be fine adding the time into
> > > > the "Unblocking updates" line and remove that WARN.
> > > >
> > > > > Then when test is over:
> > > > > 2013-09-02 14:57:02,280 WARN
> > > > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > > > > caught end of stream exception
> > > > > EndOfStreamException: Unable to read additional data from client
> > > > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > > > >     at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > > > >     at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > > > >     at java.lang.Thread.run(Thread.java:662)
> > > >
> > > > Your 10 clients are disconnecting from ZK, you're letting HBase manage it?
> > > >
> > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > > "Memstore is above high water mark and block" | wc
> > > > >    3555   49770  514558
> > > > >
> > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > > "Unable to read additional data from client sessionid" | wc
> > > > >     102    1530   12852
> > > > >
> > > > > I guess it's only because of PE, but is this something which need to
> > > > > be looked at?
> > > > >
> > > > > JM



-- 
Kevin O'Dell
Systems Engineer, Cloudera

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Ok. I don't yet have enough servers to run a separate ZK... Or I might
give it a try with pseudo-dist and ZK on the same server. I will keep you
posted if I get a chance to test that.

JM
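
For anyone trying that split, pointing HBase at an externally managed ZooKeeper
is mostly a matter of hbase.zookeeper.quorum in hbase-site.xml (plus setting
HBASE_MANAGES_ZK=false in conf/hbase-env.sh); a minimal sketch, with the
hostnames as placeholders:

<configuration>
  <!-- Comma-separated hosts of the external ZooKeeper ensemble;
       zk1/zk2/zk3.example.com are placeholders for this sketch. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>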



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
It's probably not disconnecting properly so the zookeeper server prints
that exception. FWIW if your ZK server was started outside of HBase, you
wouldn't see those traces in the master log.

J-D



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
So are you saying that it's normal to see a ZK warning each time a client
disconnects? Or is it because PE is not disconnecting correctly?

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
I ran the command line you are using; it actually creates 100 clients, and
this is why you see ~100 of them disconnecting from ZooKeeper.

J-D



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Way more than 10...

rm logs/*
bin/start-hbase.sh
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=4096
randomWrite 10
cat logs/hbase-jmspaggiari-master-t430s.log  | grep "Unable to read
additional" | wc
    102    1530   12852




Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
I'm getting multiple lines in the logs. I did not count them, but it sounds
like it's one per client.

I will re-run the test and wc the lines to see if it's really one for one...

JM



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
The reason I was asking is that if you used --nomapred, all your clients would
share the same connection, so only one stack trace should be printed (when
the connection is closed).

Since you're running with MR, how many stack traces do you see? Still one?
If so, is it a local job runner? I'm trying to understand how many
connections you are using.

In any case, it's a normal-ish stack trace.

J-D
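
For reference, the --nomapred variant is the same PE invocation used earlier in
the thread, just run as local threads instead of a MapReduce job; a sketch:

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=4096 randomWrite 10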



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
I was running a MR with 10 clients, so it's a MapRed one.

Just retried with only 1 thread and got the same exception.

So it's not making any difference.



Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Alright so your ZK is running inside the master and that's where you're
seeing those lines, and they are normal if each thread has a different
connection... Are you doing a --nomapred PE?
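
(For reference, the invocation I have in mind is the usual PE one, just a
sketch, adjust the command and client count to whatever you actually ran:

  bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomWrite 10

i.e. the same write load, but the 10 clients run as threads inside one local
JVM instead of as map tasks.)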


On Tue, Sep 3, 2013 at 10:50 AM, Jean-Marc Spaggiari <
jean-marc@spaggiari.org> wrote:

> Oh, ok ;) I just un-packed the jar and ran it. So my config file is:
> <configuration>
> </configuration>
>
>
>
> 2013/9/3 Jean-Daniel Cryans <jd...@apache.org>
>
> > On Tue, Sep 3, 2013 at 10:35 AM, Jean-Marc Spaggiari <
> > jean-marc@spaggiari.org> wrote:
> >
> > > > Your 10 clients are disconnecting from ZK, you're letting HBase
> manage
> > > it?
> > > Yep, I start PE from the command line, I don't expect to have to do
> > > anything after that. So issue is on PE side?
> > >
> >
> > Not really what I asked, I wanted to know if you set HBASE_MANAGES_ZK at
> > all in hbase-env.
> >
> >
> > >
> > > > Please don't remove the warn. It is important for troubleshooting and
> > > sizing.
> > > Can you please tell more on how it helps to do sizing? Interested.
> > >
> > > Thanks,
> > >
> > > JM
> > >
> > >
> > > 2013/9/3 Kevin O'dell <ke...@cloudera.com>
> > >
> > > > Please don't remove the warn. It is important for troubleshooting and
> > > > sizing.
> > > > On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org>
> > > wrote:
> > > >
> > > > > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> > > > > jean-marc@spaggiari.org> wrote:
> > > > >
> > > > > > While running PE with 10 clients, server keep logging:
> > > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > > > > regionserver.MemStoreFlusher: Memstore is above high water mark
> and
> > > > > > block 2362ms
> > > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > > > > regionserver.MemStoreFlusher: Memstore is above high water mark
> and
> > > > > > block 2363ms
> > > > > >
> > > > >
> > > > > Yeah that was added in HBASE-6466, it helps tracing when the
> handlers
> > > are
> > > > > blocked on the memstores, else you have to match the "Blocking
> > updates"
> > > > > with the "Unblocking updates" lines. I'd just be fine adding the
> time
> > > > into
> > > > > the "Unblocking updates" line and remove that WARN.
> > > > >
> > > > >
> > > > > >
> > > > > > Then when test is over:
> > > > > > 2013-09-02 14:57:02,280 WARN
> > > > > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181]
> server.NIOServerCnxn:
> > > > > > caught end of stream exception
> > > > > > EndOfStreamException: Unable to read additional data from client
> > > > > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > > > > >     at
> > > > > >
> > > org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > > > > >     at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > > > > >     at java.lang.Thread.run(Thread.java:662)
> > > > > >
> > > > >
> > > > > Your 10 clients are disconnecting from ZK, you're letting HBase
> > manage
> > > > it?
> > > > >
> > > > >
> > > > > >
> > > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  |
> grep
> > > > > > "Memstore is above high water mark and block" | wc
> > > > > >    3555   49770  514558
> > > > > >
> > > > > >
> > > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  |
> grep
> > > > > > "Unable to read additional data from client sessionid" | wc
> > > > > >     102    1530   12852
> > > > > >
> > > > > > I guess it's only because of PE, but is this something which need
> > to
> > > > > > be looked at?
> > > > > >
> > > > > > JM
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Oh, ok ;) I just un-packed the jar and ran it. So my config file is:
<configuration>
</configuration>
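
(So it's all defaults. For completeness, if I wanted to point the client at a
specific quorum rather than the default localhost one, my understanding is it
would just be the usual ZK properties, something like:

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

but here nothing is overridden.)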



2013/9/3 Jean-Daniel Cryans <jd...@apache.org>

> On Tue, Sep 3, 2013 at 10:35 AM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
> > > Your 10 clients are disconnecting from ZK, you're letting HBase manage
> > it?
> > Yep, I start PE from the command line, I don't expect to have to do
> > anything after that. So issue is on PE side?
> >
>
> Not really what I asked, I wanted to know if you set HBASE_MANAGES_ZK at
> all in hbase-env.
>
>
> >
> > > Please don't remove the warn. It is important for troubleshooting and
> > sizing.
> > Can you please tell more on how it helps to do sizing? Interested.
> >
> > Thanks,
> >
> > JM
> >
> >
> > 2013/9/3 Kevin O'dell <ke...@cloudera.com>
> >
> > > Please don't remove the warn. It is important for troubleshooting and
> > > sizing.
> > > On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org>
> > wrote:
> > >
> > > > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> > > > jean-marc@spaggiari.org> wrote:
> > > >
> > > > > While running PE with 10 clients, server keep logging:
> > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > > block 2362ms
> > > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > > block 2363ms
> > > > >
> > > >
> > > > Yeah that was added in HBASE-6466, it helps tracing when the handlers
> > are
> > > > blocked on the memstores, else you have to match the "Blocking
> updates"
> > > > with the "Unblocking updates" lines. I'd just be fine adding the time
> > > into
> > > > the "Unblocking updates" line and remove that WARN.
> > > >
> > > >
> > > > >
> > > > > Then when test is over:
> > > > > 2013-09-02 14:57:02,280 WARN
> > > > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > > > > caught end of stream exception
> > > > > EndOfStreamException: Unable to read additional data from client
> > > > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > > > >     at
> > > > >
> > org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > > > >     at
> > > > >
> > > >
> > >
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > > > >     at java.lang.Thread.run(Thread.java:662)
> > > > >
> > > >
> > > > Your 10 clients are disconnecting from ZK, you're letting HBase
> manage
> > > it?
> > > >
> > > >
> > > > >
> > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > > "Memstore is above high water mark and block" | wc
> > > > >    3555   49770  514558
> > > > >
> > > > >
> > > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > > "Unable to read additional data from client sessionid" | wc
> > > > >     102    1530   12852
> > > > >
> > > > > I guess it's only because of PE, but is this something which need
> to
> > > > > be looked at?
> > > > >
> > > > > JM
> > > > >
> > > >
> > >
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
On Tue, Sep 3, 2013 at 10:35 AM, Jean-Marc Spaggiari <
jean-marc@spaggiari.org> wrote:

> > Your 10 clients are disconnecting from ZK, you're letting HBase manage
> it?
> Yep, I start PE from the command line, I don't expect to have to do
> anything after that. So issue is on PE side?
>

Not really what I asked, I wanted to know if you set HBASE_MANAGES_ZK at
all in hbase-env.
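
(That's the knob in conf/hbase-env.sh, e.g.:

  # tell the start/stop scripts whether to manage a ZK instance along with HBase
  export HBASE_MANAGES_ZK=true

If I remember right it defaults to true when unset, i.e. the scripts start ZK
for you; set it to false if you run your own quorum.)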


>
> > Please don't remove the warn. It is important for troubleshooting and
> sizing.
> Can you please tell more on how it helps to do sizing? Interested.
>
> Thanks,
>
> JM
>
>
> 2013/9/3 Kevin O'dell <ke...@cloudera.com>
>
> > Please don't remove the warn. It is important for troubleshooting and
> > sizing.
> > On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org>
> wrote:
> >
> > > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> > > jean-marc@spaggiari.org> wrote:
> > >
> > > > While running PE with 10 clients, server keep logging:
> > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > block 2362ms
> > > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > > block 2363ms
> > > >
> > >
> > > Yeah that was added in HBASE-6466, it helps tracing when the handlers
> are
> > > blocked on the memstores, else you have to match the "Blocking updates"
> > > with the "Unblocking updates" lines. I'd just be fine adding the time
> > into
> > > the "Unblocking updates" line and remove that WARN.
> > >
> > >
> > > >
> > > > Then when test is over:
> > > > 2013-09-02 14:57:02,280 WARN
> > > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > > > caught end of stream exception
> > > > EndOfStreamException: Unable to read additional data from client
> > > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > > >     at
> > > >
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > > >     at
> > > >
> > >
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > > >     at java.lang.Thread.run(Thread.java:662)
> > > >
> > >
> > > Your 10 clients are disconnecting from ZK, you're letting HBase manage
> > it?
> > >
> > >
> > > >
> > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > "Memstore is above high water mark and block" | wc
> > > >    3555   49770  514558
> > > >
> > > >
> > > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > > "Unable to read additional data from client sessionid" | wc
> > > >     102    1530   12852
> > > >
> > > > I guess it's only because of PE, but is this something which need to
> > > > be looked at?
> > > >
> > > > JM
> > > >
> > >
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
> Your 10 clients are disconnecting from ZK, you're letting HBase manage it?
Yep, I start PE from the command line, I don't expect to have to do
anything after that. So issue is on PE side?

> Please don't remove the warn. It is important for troubleshooting and
sizing.
Can you please tell more on how it helps to do sizing? Interested.

Thanks,

JM


2013/9/3 Kevin O'dell <ke...@cloudera.com>

> Please don't remove the warn. It is important for troubleshooting and
> sizing.
> On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org> wrote:
>
> > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> > jean-marc@spaggiari.org> wrote:
> >
> > > While running PE with 10 clients, server keep logging:
> > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > block 2362ms
> > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > block 2363ms
> > >
> >
> > Yeah that was added in HBASE-6466, it helps tracing when the handlers are
> > blocked on the memstores, else you have to match the "Blocking updates"
> > with the "Unblocking updates" lines. I'd just be fine adding the time
> into
> > the "Unblocking updates" line and remove that WARN.
> >
> >
> > >
> > > Then when test is over:
> > > 2013-09-02 14:57:02,280 WARN
> > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > > caught end of stream exception
> > > EndOfStreamException: Unable to read additional data from client
> > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > >     at
> > > org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > >     at
> > >
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > >     at java.lang.Thread.run(Thread.java:662)
> > >
> >
> > Your 10 clients are disconnecting from ZK, you're letting HBase manage
> it?
> >
> >
> > >
> > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > "Memstore is above high water mark and block" | wc
> > >    3555   49770  514558
> > >
> > >
> > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > "Unable to read additional data from client sessionid" | wc
> > >     102    1530   12852
> > >
> > > I guess it's only because of PE, but is this something which need to
> > > be looked at?
> > >
> > > JM
> > >
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
On Tue, Sep 3, 2013 at 10:30 AM, Kevin O'dell <ke...@cloudera.com> wrote:

> Please don't remove the warn. It is important for troubleshooting and
> sizing.
>

There's nothing the WARN gives you that adding the waiting time to the
"Unblocking updates..." line wouldn't give you, and you'd have a whole lot
less log spam.

Also, just to make sure I understand you, you say the warn is important for
troubleshooting, but have you used it? It's only in 0.95+.
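
(To be concrete, without the WARN you'd do the pairing by hand, roughly:

  grep -E "Blocking updates|Unblocking updates" logs/hbase-jmspaggiari-master-t430s.log

with the same log file that was grepped earlier in the thread, then diff the
timestamps of each Blocking/Unblocking pair yourself, which is the manual
matching I'd like the "Unblocking updates" line to replace.)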


> On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org> wrote:
>
> > On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> > jean-marc@spaggiari.org> wrote:
> >
> > > While running PE with 10 clients, server keep logging:
> > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > block 2362ms
> > > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > > block 2363ms
> > >
> >
> > Yeah that was added in HBASE-6466, it helps tracing when the handlers are
> > blocked on the memstores, else you have to match the "Blocking updates"
> > with the "Unblocking updates" lines. I'd just be fine adding the time
> into
> > the "Unblocking updates" line and remove that WARN.
> >
> >
> > >
> > > Then when test is over:
> > > 2013-09-02 14:57:02,280 WARN
> > > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > > caught end of stream exception
> > > EndOfStreamException: Unable to read additional data from client
> > > sessionid 0x140dfdfc6270044, likely client has closed socket
> > >     at
> > > org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> > >     at
> > >
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> > >     at java.lang.Thread.run(Thread.java:662)
> > >
> >
> > Your 10 clients are disconnecting from ZK, you're letting HBase manage
> it?
> >
> >
> > >
> > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > "Memstore is above high water mark and block" | wc
> > >    3555   49770  514558
> > >
> > >
> > > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > > "Unable to read additional data from client sessionid" | wc
> > >     102    1530   12852
> > >
> > > I guess it's only because of PE, but is this something which need to
> > > be looked at?
> > >
> > > JM
> > >
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Kevin O'dell <ke...@cloudera.com>.
Please don't remove the warn. It is important for troubleshooting and
sizing.
On Sep 3, 2013 1:29 PM, "Jean-Daniel Cryans" <jd...@apache.org> wrote:

> On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
> > While running PE with 10 clients, server keep logging:
> > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > block 2362ms
> > 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> > regionserver.MemStoreFlusher: Memstore is above high water mark and
> > block 2363ms
> >
>
> Yeah that was added in HBASE-6466, it helps tracing when the handlers are
> blocked on the memstores, else you have to match the "Blocking updates"
> with the "Unblocking updates" lines. I'd just be fine adding the time into
> the "Unblocking updates" line and remove that WARN.
>
>
> >
> > Then when test is over:
> > 2013-09-02 14:57:02,280 WARN
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> > caught end of stream exception
> > EndOfStreamException: Unable to read additional data from client
> > sessionid 0x140dfdfc6270044, likely client has closed socket
> >     at
> > org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
> >     at
> >
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> >     at java.lang.Thread.run(Thread.java:662)
> >
>
> Your 10 clients are disconnecting from ZK, you're letting HBase manage it?
>
>
> >
> > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > "Memstore is above high water mark and block" | wc
> >    3555   49770  514558
> >
> >
> > /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> > "Unable to read additional data from client sessionid" | wc
> >     102    1530   12852
> >
> > I guess it's only because of PE, but is this something which need to
> > be looked at?
> >
> > JM
> >
>

Re: 0.96.0 keep logging "Memstore is above high water mark"

Posted by Jean-Daniel Cryans <jd...@apache.org>.
On Mon, Sep 2, 2013 at 12:04 PM, Jean-Marc Spaggiari <
jean-marc@spaggiari.org> wrote:

> While running PE with 10 clients, server keep logging:
> 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=5,port=44439]
> regionserver.MemStoreFlusher: Memstore is above high water mark and
> block 2362ms
> 2013-09-02 14:56:59,919 WARN  [RpcServer.handler=18,port=44439]
> regionserver.MemStoreFlusher: Memstore is above high water mark and
> block 2363ms
>

Yeah that was added in HBASE-6466, it helps tracing when the handlers are
blocked on the memstores, else you have to match the "Blocking updates"
with the "Unblocking updates" lines. I'd just be fine adding the time into
the "Unblocking updates" line and remove that WARN.


>
> Then when test is over:
> 2013-09-02 14:57:02,280 WARN
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
> caught end of stream exception
> EndOfStreamException: Unable to read additional data from client
> sessionid 0x140dfdfc6270044, likely client has closed socket
>     at
> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
>     at
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>     at java.lang.Thread.run(Thread.java:662)
>

Your 10 clients are disconnecting from ZK, you're letting HBase manage it?


>
> /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> "Memstore is above high water mark and block" | wc
>    3555   49770  514558
>
>
> /hbase-0.96.0$ cat logs/hbase-jmspaggiari-master-t430s.log  | grep
> "Unable to read additional data from client sessionid" | wc
>     102    1530   12852
>
> I guess it's only because of PE, but is this something which need to
> be looked at?
>
> JM
>