Posted to dev@hbase.apache.org by "Sean Busbey (JIRA)" <ji...@apache.org> on 2014/08/20 18:48:26 UTC
[jira] [Resolved] (HBASE-6924) HBase Master spews into a loop if a user attempts to create a snappy table when snappy isn't properly configured
[ https://issues.apache.org/jira/browse/HBASE-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Busbey resolved HBASE-6924.
--------------------------------
Resolution: Duplicate
Closing as a specific instance of the general problem described in HBASE-4458. Please update if there's something missing in that ticket.
> HBase Master spews into a loop if a user attempts to create a snappy table when snappy isn't properly configured
> ----------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-6924
> URL: https://issues.apache.org/jira/browse/HBASE-6924
> Project: HBase
> Issue Type: Bug
> Reporter: Alex Newman
>
> If a user attempts to create a table with Snappy compression, for instance:
> create 't1', { NAME => 'c1', COMPRESSION => 'snappy' }
> on a stock HBase install (without Snappy properly set up),
> the master will spew this error in a loop:
> 12/10/02 12:41:38 INFO handler.OpenRegionHandler: Opening of region {NAME => 't1,,1349206881317.2d34e32205ffe677496b03faa7e66063.', STARTKEY => '', ENDKEY => '', ENCODED => 2d34e32205ffe677496b03faa7e66063,} failed, marking as FAILED_OPEN in ZK
> 12/10/02 12:41:38 INFO regionserver.HRegionServer: Received request to open region: t1,,1349206881317.2d34e32205ffe677496b03faa7e66063.
> 12/10/02 12:41:38 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
> 12/10/02 12:41:38 ERROR handler.OpenRegionHandler: Failed open of region=t1,,1349206881317.2d34e32205ffe677496b03faa7e66063., starting to roll back the global memstore size.
> java.io.IOException: Compression algorithm 'snappy' previously failed test.
> at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:78)
> at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:3822)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3811)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3761)
> at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
> at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> The loop continues even after the shell is killed.
> In fact, even after an HBase restart the master will endlessly spew this painful, high-overhead error.
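> As a sanity check before creating a compressed table, HBase ships a CompressionTest utility that exercises a codec against a path. A minimal sketch (the HDFS path and host are placeholders, not from this report):
>
> # Verify the Snappy codec is usable on this node; prints SUCCESS if the
> # native libraries are found, or throws if the codec cannot load.
> hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://namenode:8020/tmp/test.txt snappy
>
> Running this on each region server host surfaces the misconfiguration up front, instead of at region-open time.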
--
This message was sent by Atlassian JIRA
(v6.2#6252)