Posted to user@hbase.apache.org by Joe Pepersack <jo...@pepersack.net> on 2010/03/10 23:05:57 UTC

Table left unresponsive after Thrift socket timeout

I have a Perl script which is putting data into HBase via the Thrift
interface.

On the Perl side, I get Thrift::TException 'TSocket: timed out reading 4
bytes from hbase4:9090'
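
For context, a bare-bones put over the Perl Thrift bindings looks roughly
like the sketch below. This is an illustration only, not the actual script:
the Hbase:: package names are assumed from bindings generated off
Hbase.thrift, the host/table/column names are placeholders, and the
setSendTimeout/setRecvTimeout calls on Thrift::Socket are the knobs behind
the "timed out reading 4 bytes" error.

use strict;
use warnings;
use Thrift::Socket;
use Thrift::BufferedTransport;
use Thrift::BinaryProtocol;
use Hbase::Hbase;    # generated bindings; package names are an assumption

# Placeholder host/port; both timeouts are in milliseconds.
my $sock = Thrift::Socket->new('hbase4', 9090);
$sock->setSendTimeout(10000);
$sock->setRecvTimeout(20000);    # the timeout that fires as "timed out reading 4 bytes"

my $transport = Thrift::BufferedTransport->new($sock);
my $protocol  = Thrift::BinaryProtocol->new($transport);
my $client    = Hbase::HbaseClient->new($protocol);

$transport->open();
my $mutation = Hbase::Mutation->new({ column => 'final:test', value => 'testval' });
eval {
    # 0.20-era Thrift signature: mutateRow(table, row, [mutations])
    $client->mutateRow('person', 'test123', [ $mutation ]);
};
if (my $err = $@) {
    # After a TException the connection state is unknown; reconnect rather
    # than reusing this transport for the next put.
    warn 'put failed: ' . (ref $err ? $err->{message} : $err);
}
$transport->close();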

After this happens, the table is completely locked up -- I can't do
anything to the table either via Thrift or via the command shell, e.g.:

hbase(main):023:0> truncate 'person'
Truncating person; it may take a while
Disabling table...
NativeException: org.apache.hadoop.hbase.RegionException: Retries
exhausted, it took too long to wait for the table person to be disabled.

hbase(main):024:0> put 'person','test123','final:test','testval'
NativeException: org.apache.hadoop.hbase.client.RetriesExhaustedException:
Trying to contact region server Some server, retryOnlyOne=true, index=0,
islastrow=true, tries=6, numtries=7, i=0, listsize=1,
region=person,,1268257330136 for region person,,1268257330136, row
'test123', but failed after 7 attempts.
Exceptions:

and so forth.

Other tables on the same cluster are not affected, and all the region servers are up.

So, two questions:
How do I get it unstuck?
How do I keep it from getting stuck?


Re: Table left unresponsive after Thrift socket timeout

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Joe,

We'll need to learn what happened to that region; they usually don't
throw up after a few inserts ;)

So in that region server's log, from before you tried disabling the table,
do you see anything wrong (exceptions, probably)? If you have a web server
handy, it would be nice if you could drop the full RS log and the master log there.
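
To narrow it down, something quick along these lines will pull the suspect
entries out of the RS log from just before the disable attempt (throwaway
sketch; the log path is a placeholder):

perl -ne 'print if /^2010-03-10 20:4/ && /Exception|ERROR|FATAL/' \
    /path/to/hbase-regionserver-hbase4.log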

thx!

J-D

On Wed, Mar 10, 2010 at 5:54 PM, Joe Pepersack <jo...@pepersack.net> wrote:
> On 03/10/2010 07:58 PM, Jean-Daniel Cryans wrote:
>>
>> Which HBase version? What's your hardware like? How much data were you
>> inserting? Did you grep the region server logs for any IOException or
>> such? Can we see an excerpt of those logs around the time of the "lock
>> up"?
>>
>
> Version: 0.20.3-1.cloudera
> Hardware: dual Xeon 4 core, 16G, 1.7T disk
> 10x nodes: 1 master, 1 secondary master, 8x regionservers.  2x zookeepers
> running on regionservers
>
>
> It appears to have died after only a few rows were inserted.   There's only
> one region shown on the status page.  Curiously, that region does NOT show
> up in the list of online regions for the listed regionserver.
>
> Master log, from the point where I ran "drop 'Person'" in the shell:
>
> 2010-03-10 20:44:44,812 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.rootScanner scanning meta region {server: 10.40.0.37:60020,
> regionname: -ROOT-,,0, startKey:<>}
> 2010-03-10 20:44:44,815 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.rootScanner scan of 1 row(s) of meta region {server:
> 10.40.0.37:60020, regionname: -ROOT-,,0, startKey:<>} complete
> 2010-03-10 20:44:44,836 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner scanning meta region {server: 10.40.0.36:60020,
> regionname: .META.,,1, startKey:<>}
> 2010-03-10 20:44:44,844 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner scan of 3 row(s) of meta region {server:
> 10.40.0.36:60020, regionname: .META.,,1, startKey:<>} complete
> 2010-03-10 20:44:44,844 INFO org.apache.hadoop.hbase.master.BaseScanner: All
> 1 .META. region(s) scanned
> 2010-03-10 20:44:45,357 INFO org.apache.hadoop.hbase.master.ServerManager: 5
> region servers, 0 dead, average load 1.2
> 2010-03-10 20:45:03,209 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
> 2010-03-10 20:45:03,209 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing regions
> currently being served
> 2010-03-10 20:45:03,210 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Adding region
> Person,,1268251509658 to setClosing list
> 2010-03-10 20:45:04,260 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
> 2010-03-10 20:45:04,260 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing regions
> currently being served
> 2010-03-10 20:45:04,260 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Adding region
> Person,,1268251509658 to setClosing list
> 2010-03-10 20:45:05,273 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
> 2010-03-10 20:45:05,273 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing regions
> currently being served
> 2010-03-10 20:45:05,273 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Adding region
> Person,,1268251509658 to setClosing list
> 2010-03-10 20:45:06,287 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
> 2010-03-10 20:45:06,287 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing regions
> currently being served
> 2010-03-10 20:45:06,287 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Adding region
> Person,,1268251509658 to setClosing list
> 2010-03-10 20:45:08,301 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
> 2010-03-10 20:45:08,301 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Processing regions
> currently being served
> 2010-03-10 20:45:08,301 DEBUG
> org.apache.hadoop.hbase.master.ChangeTableState: Adding region
> Person,,1268251509658 to setClosing list
>
>
> Log from the region server where the region is supposed to be for the same
> time frame:
>
> 2010-03-10 20:43:50,889 DEBUG
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
> Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB
> (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0,
> Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN
> 2010-03-10 20:44:50,889 DEBUG
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
> Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB
> (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0,
> Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN
> 2010-03-10 20:45:04,058 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:04,059 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:05,062 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:05,063 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:06,066 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:06,067 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:07,070 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:07,071 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:09,079 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:09,079 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:11,088 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:11,088 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:15,104 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE:
> Person,,1268251509658
> 2010-03-10 20:45:15,105 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker:
> MSG_REGION_CLOSE: Person,,1268251509658
> 2010-03-10 20:45:50,889 DEBUG
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
> Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB
> (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0,
> Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN
>
>
>
>

Re: Table left unresponsive after Thrift socket timeout

Posted by Joe Pepersack <jo...@pepersack.net>.
On 03/10/2010 07:58 PM, Jean-Daniel Cryans wrote:
> Which HBase version? What's your hardware like? How much data were you
> inserting? Did you grep the region server logs for any IOException or
> such? Can we see an excerpt of those logs around the time of the "lock
> up"?
>    
Version: 0.20.3-1.cloudera
Hardware: dual Xeon 4 core, 16G, 1.7T disk
10x nodes: 1 master, 1 secondary master, 8x regionservers.  2x 
zookeepers running on regionservers


It appears to have died after only a few rows were inserted.   There's 
only one region shown on the status page.  Curiously, that region does 
NOT show up in the list of online regions for the listed regionserver.
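
For completeness, what .META. has recorded for that region can also be
pulled over the same Thrift interface. A rough sketch, reusing a $client
handle set up as in the put sketch near the top of the thread (the Hbase::
package names and the list-of-TRowResult return from scannerGet are
assumptions about the 0.20-era bindings):

# $client is an Hbase::HbaseClient, connected exactly as in the earlier sketch.
# Scan the info: columns of .META. for the Person rows and print which
# server, if any, each region is assigned to.
my $scanner = $client->scannerOpen('.META.', 'Person,,',
                                   ['info:server', 'info:regioninfo']);
SCAN: while (1) {
    my $rows = $client->scannerGet($scanner);   # assumed to return [] at end of scan
    last SCAN unless $rows && @$rows;
    for my $r (@$rows) {
        last SCAN if $r->{row} !~ /^Person,/;   # walked past the Person rows
        my $server = $r->{columns}{'info:server'};
        printf "%s => %s\n", $r->{row},
            $server ? $server->{value} : '(no info:server, i.e. not assigned)';
    }
}
$client->scannerClose($scanner);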

Master log, from the point where I ran "drop 'Person'" in the shell:

2010-03-10 20:44:44,812 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.rootScanner scanning meta region {server: 10.40.0.37:60020, regionname: -ROOT-,,0, startKey:<>}
2010-03-10 20:44:44,815 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.rootScanner scan of 1 row(s) of meta region {server: 10.40.0.37:60020, regionname: -ROOT-,,0, startKey:<>} complete
2010-03-10 20:44:44,836 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scanning meta region {server: 10.40.0.36:60020, regionname: .META.,,1, startKey:<>}
2010-03-10 20:44:44,844 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scan of 3 row(s) of meta region {server: 10.40.0.36:60020, regionname: .META.,,1, startKey:<>} complete
2010-03-10 20:44:44,844 INFO org.apache.hadoop.hbase.master.BaseScanner: All 1 .META. region(s) scanned
2010-03-10 20:44:45,357 INFO org.apache.hadoop.hbase.master.ServerManager: 5 region servers, 0 dead, average load 1.2
2010-03-10 20:45:03,209 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
2010-03-10 20:45:03,209 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing regions currently being served
2010-03-10 20:45:03,210 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Adding region Person,,1268251509658 to setClosing list
2010-03-10 20:45:04,260 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
2010-03-10 20:45:04,260 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing regions currently being served
2010-03-10 20:45:04,260 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Adding region Person,,1268251509658 to setClosing list
2010-03-10 20:45:05,273 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
2010-03-10 20:45:05,273 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing regions currently being served
2010-03-10 20:45:05,273 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Adding region Person,,1268251509658 to setClosing list
2010-03-10 20:45:06,287 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
2010-03-10 20:45:06,287 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing regions currently being served
2010-03-10 20:45:06,287 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Adding region Person,,1268251509658 to setClosing list
2010-03-10 20:45:08,301 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing unserved regions
2010-03-10 20:45:08,301 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Processing regions currently being served
2010-03-10 20:45:08,301 DEBUG org.apache.hadoop.hbase.master.ChangeTableState: Adding region Person,,1268251509658 to setClosing list


Log from the region server where the region is supposed to be for the 
same time frame:

2010-03-10 20:43:50,889 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0, Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN
2010-03-10 20:44:50,889 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0, Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN
2010-03-10 20:45:04,058 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:04,059 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:05,062 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:05,063 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:06,066 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:06,067 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:07,070 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:07,071 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:09,079 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:09,079 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:11,088 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:11,088 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:15,104 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:15,105 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_CLOSE: Person,,1268251509658
2010-03-10 20:45:50,889 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=1.6213074MB (1700064), Free=195.8787MB (205393696), Max=197.5MB (207093760), Counts: Blocks=0, Access=0, Hit=0, Miss=0, Evictions=0, Evicted=0, Ratios: Hit Ratio=NaN%, Miss Ratio=NaN%, Evicted/Run=NaN




Re: Table left unresponsive after Thrift socket timeout

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Which HBase version? What's your hardware like? How much data were you
inserting? Did you grep the region server logs for any IOException or
such? Can we see an excerpt of those logs around the time of the "lock
up"?

Thx,

J-D

On Wed, Mar 10, 2010 at 2:05 PM, Joe Pepersack <jo...@pepersack.net> wrote:
> I have a Perl script which is putting data into HBase via the Thrift
> interface.
>
> On the Perl side, I get Thrift::TException 'TSocket: timed out reading 4
> bytes from hbase4:9090'
>
> After this happens, the table is completely locked up -- I can't do
> anything to the table either via Thrift or via the command shell, e.g.:
>
> hbase(main):023:0> truncate 'person'
> Truncating person; it may take a while
> Disabling table...
> NativeException: org.apache.hadoop.hbase.RegionException: Retries
> exhausted, it took too long to wait for the table person to be disabled.
>
> hbase(main):024:0> put 'person','test123','final:test','testval'
> NativeException: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Trying to contact region server Some server, retryOnlyOne=true, index=0,
> islastrow=true, tries=6, numtries=7, i=0, listsize=1,
> region=person,,1268257330136 for region person,,1268257330136, row
> 'test123', but failed after 7 attempts.
> Exceptions:
>
> and so forth.
>
> Other tables on the same cluster are not affected, and all the region servers are up.
>
> So, two questions:
> How do I get it unstuck?
> How do I keep it from getting stuck?
>
>