Posted to issues@hbase.apache.org by "Alex Newman (JIRA)" <ji...@apache.org> on 2012/10/02 21:49:07 UTC

[jira] [Commented] (HBASE-6924) HBase Master spews into a loop if a user attempts to create a snappy table when snappy isn't properly configured

    [ https://issues.apache.org/jira/browse/HBASE-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468012#comment-13468012 ] 

Alex Newman commented on HBASE-6924:
------------------------------------

I assume the issue is that opening a region can fail transiently, so we want some retry logic; the problem is that the retries never stop.
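The fix implied above is to bound the retries rather than loop forever on a permanent failure such as a misconfigured codec. A minimal sketch of that idea (hypothetical helper, not actual HBase code) with exponential backoff:

```java
// Hypothetical sketch (not HBase's actual OpenRegionHandler code):
// bound the retries around a region-open attempt so a permanently
// failing open (e.g. a missing compression codec) is eventually
// given up on instead of retried forever.
public class BoundedRetry {
    interface Attempt { void run() throws Exception; }

    // Try `attempt` up to maxRetries times, doubling the delay each time.
    // Returns true on success, false once the retry budget is exhausted.
    static boolean runWithRetries(Attempt attempt, int maxRetries,
                                  long initialDelayMs) throws InterruptedException {
        long delay = initialDelayMs;
        for (int i = 0; i < maxRetries; i++) {
            try {
                attempt.run();
                return true;
            } catch (Exception e) {
                System.out.println("attempt " + (i + 1) + " failed: " + e.getMessage());
                Thread.sleep(delay);
                delay *= 2;
            }
        }
        return false; // caller would then mark the region FAILED_OPEN for good
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate an open that always fails, as with a misconfigured codec.
        boolean ok = runWithRetries(() -> {
            throw new java.io.IOException(
                "Compression algorithm 'snappy' previously failed test.");
        }, 3, 10);
        System.out.println("opened=" + ok); // prints "opened=false"
    }
}
```

With a cap like this, the master would log a handful of failures and park the region in FAILED_OPEN, rather than spewing the same error in a loop.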
                
> HBase Master spews into a loop if a user attempts to create a snappy table when snappy isn't properly configured
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-6924
>                 URL: https://issues.apache.org/jira/browse/HBASE-6924
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Alex Newman
>
> If a user attempts to create a table, for instance
>  create 't1', { NAME => 'c1', COMPRESSION => 'snappy' }
> on stock HBase (without Snappy set up),
> the master will spew this error in a loop:
> 12/10/02 12:41:38 INFO handler.OpenRegionHandler: Opening of region {NAME => 't1,,1349206881317.2d34e32205ffe677496b03faa7e66063.', STARTKEY => '', ENDKEY => '', ENCODED => 2d34e32205ffe677496b03faa7e66063,} failed, marking as FAILED_OPEN in ZK
> 12/10/02 12:41:38 INFO regionserver.HRegionServer: Received request to open region: t1,,1349206881317.2d34e32205ffe677496b03faa7e66063.
> 12/10/02 12:41:38 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/10/02 12:41:38 ERROR handler.OpenRegionHandler: Failed open of region=t1,,1349206881317.2d34e32205ffe677496b03faa7e66063., starting to roll back the global memstore size.
> java.io.IOException: Compression algorithm 'snappy' previously failed test.
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:78)
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:3822)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3811)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3761)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> This continues even after the shell is killed. In fact, even after an HBase restart, the cluster endlessly spews this painful, high-overhead error.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira