Posted to user@hbase.apache.org by Dhirendra Singh <dp...@gmail.com> on 2012/09/10 19:13:35 UTC

Getting ScannerTimeoutException even after several calls in the specified time limit

I am facing this exception while iterating over a big table; by default I have specified caching as 100.

I am getting the exception below, and even though I checked that several calls were made to the scanner before it threw, it somehow says 86095ms passed since the last invocation.

I also observed that if I set scan.setCaching(false) it succeeds. Could someone please explain, or point me to some document about, what is happening here and what the best practice is to avoid it?



org.apache.hadoop.hbase.client.ScannerTimeoutException: 86095ms passed since the last invocation, timeout is currently set to 60000
	at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1402)
	at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.next(HTable.java:1413)
	at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.next(HTable.java:1388)
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 86095ms passed since the last invocation, timeout is currently set to 60000
	at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
	at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1399)
	... 8 more
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: -1634449530807377233
	at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2114)
	at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:96)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:108)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:42)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
	at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1293)
	... 9 more


Thanks,
Dhirendra

RE: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by Anoop Sam John <an...@huawei.com>.
> could someone please clarify, when i say caching 100 or any number, where does this actually happen, on server (cluster) or client?

It happens in both places. When the scan is issued with caching = N, the client passes the number N to the first region under scan for this specific scan. The server side (RegionServer) tries to collect up to N rows from that region. If it finds them all, the client gets the full result for that next() call from a single region; if the region yields fewer than N rows, the client fetches the remaining rows from the next region, and so on. Mostly this happens on the server side alone (one region may well hold all N rows), but when you have Filter conditions the server might not find N rows within a single region.

Note: the client tries to return N rows per next() call, since N is specified as the caching value, so a single next() might contact many regions across different RegionServers. There is also a max result size config parameter available on the client side: if the total size of the accumulated results exceeds that value before N rows have been found, the client stops scanning even though it does not yet have N results. If that size limit is never crossed, one call to next() might walk through all the regions. [You may be getting ScannerTimeouts due to RPC timeouts.]
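To make the knobs above concrete, here is a minimal sketch against the 0.92-era client API. The table name is hypothetical, the values are illustrative, and the max-result-size property name is my assumption of the client-side parameter mentioned above, so verify it against your release:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Client-side cap on the total byte size accumulated per next() call
        // (assumed property name; illustrative value).
        conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);

        HTable table = new HTable(conf, "mytable");  // hypothetical table name
        Scan scan = new Scan();
        scan.setCaching(100);  // ask for up to 100 rows per next() RPC
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result row : scanner) {
                // Keep per-row work fast: the scanner lease on the server
                // expires if too much time passes between next() RPCs.
            }
        } finally {
            scanner.close();  // releases the server-side scanner and its lease
            table.close();
        }
    }
}

Note that with caching = 100, most iterator calls are served from the client-side buffer and only every 100th row triggers an RPC, which is why "several calls" to the scanner can still leave a long gap between actual invocations: slow processing inside the loop, not the number of calls, is what lets the lease expire.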

Hope I have answered your question..  :)

-Anoop-

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by Dhirendra Singh <dp...@gmail.com>.
Could someone please clarify: when I say caching 100 (or any number), where does this actually happen, on the server (cluster) or the client? If I assume it happens on the cluster, is this ScannerTimeout caused by caching, i.e. the server might have run out of memory and hence was not able to respond within the specified timeout?

Any link related to the caching mechanism in HBase would be of great help.

Thanks,


-- 
Warm Regards,
Dhirendra Pratap
+91. 9717394713

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by Otis Gospodnetic <ot...@gmail.com>.
For pretty graphs with JVM GC info + system + HBase metrics you could also
easily hook up SPM to your cluster.  See URL in signature.

Otis
--
Performance Monitoring - http://sematext.com/spm

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by HARI KUMAR <ha...@gmail.com>.
For GC monitoring, add the following line to hbase-env.sh, then view the resulting log file with a tool like GCViewer, or use a tool like VisualVM to look at your GC consumption:

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

./hari

On Tue, Sep 11, 2012 at 2:11 PM, Dhirendra Singh <dp...@gmail.com> wrote:

> No, I am not doing parallel scans.
>
> *If yes, check the time taken for GC and
> the number of calls that can be served at your end point.*
>
> Could you please tell me how to do that? Where can I see the GC logs?


-- 
FROM
    HARI KUMAR.N

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by HARI KUMAR <ha...@gmail.com>.
Hi,

Are you trying to do parallel scans? If yes, check the time taken for GC and
the number of calls that can be served at your end point.

Best Regards
N.Hari Kumar




-- 
FROM
    HARI KUMAR.N

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by Dhirendra Singh <dp...@gmail.com>.
I tried with a smaller caching value, i.e. 10, and it failed again; no, it's not really a big cell. This small cluster (4 nodes) is only used for HBase, and I am currently using hbase-0.92.1-cdh4.0.1. Could you just let me know how I could debug this issue?


Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 99560ms passed since the last invocation, timeout is currently set to 60000
	at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
	at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1399)
	... 5 more
Caused by: org.apache.hadoop.hbase.UnknownScannerException:
org.apache.hadoop.hbase.UnknownScannerException: Name:
-8889369042827960647
	at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2114)
	at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)






-- 
Warm Regards,
Dhirendra Pratap
+91. 9717394713

Re: Getting ScannerTimeoutException even after several calls in the specified time limit

Posted by Stack <st...@duboce.net>.
Try again with caching < 100 and see if it works. A big cell? A GC pause?
You should be able to tell roughly which server is being traversed
when you get the timeout. Anything else going on on that server at
the time? What version of HBase?
St.Ack
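For reference, the 60000ms in the exception is the scanner lease/timeout period. If each batch of rows genuinely needs more than 60 seconds of client-side processing, the usual mitigation besides lowering caching is to raise that period. A sketch, assuming the 0.92-era property name (set it on the RegionServers, which own the lease, and in the client configuration, which appears to read the same property to decide when to throw ScannerTimeoutException):

<!-- hbase-site.xml; illustrative value -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <!-- scanner lease timeout in milliseconds; default is 60000 -->
  <value>120000</value>
</property>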