Posted to user@hbase.apache.org by Arvid Warnecke <ar...@nostalgix.org> on 2012/07/08 09:48:05 UTC

Enable Snappy compression - not able to load the libs on startup

Hello,

I already found some old entries from mailing lists and articles at
Cloudera on how to use the Snappy library from Hadoop in HBase, but it does
not seem to work for me.

I installed Hadoop and HBase from the tarballs, because there are no
packages available for Arch Linux. Everything worked fine, but I am not
able to use any compression for my tables.

When I use

hbase> create 'table', {NAME=>'fam', COMPRESSION=>'snappy'} 

I see in the logs from the regionserver lots of the same error messages:
2012-07-07 17:00:17,646 ERROR
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed
open of region=rawdb,,1341672997475.31ecf39289eb5034fb6a3c9f1a0cad2b.
java.io.IOException: Compression algorithm 'snappy' previously failed
test.
	at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:78)
	at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:2797)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2786)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2774)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:319)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:105)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:163)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)

I already tried to use the following in the hbase-env.sh file:

export HBASE_LIBRARY_PATH=/home/madhatter/CDH3/hadoop/lib/native/Linux-amd64-64

That is where my Cloudera Hadoop & HBase are located, but it seems that
it does not do the trick. Do I need to set other variables as well?
CLASSPATHes or anything like that? Compression seems to be the only
thing which is not working. When I installed HBase as Cloudera Packages
in Debian I never had such issues.
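
(For reference, one way to reproduce the failure outside the regionserver is to run the same CompressionTest by hand; this is only a rough sketch, and the test file path is purely illustrative:

hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy   # completes without the IOException above if the codec can be loaded
ls /home/madhatter/CDH3/hadoop/lib/native/Linux-amd64-64                            # confirm libsnappy*.so and libhadoop*.so are actually there
)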

Best regards,
Arvid

-- 
[ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
[ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
---[  ThreePiO was right: Let the Wookiee win.  ]---

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Asaf Mesika <as...@gmail.com>.
start-hbase.sh is a wrapper script on top of hbase-daemon.sh, which in turn is on top of the hbase script. The hbase script itself takes, in some cases (as shown below), environment variables from the Hadoop shell script.
The first thing you need to check is what I wrote before: the value of -Djava.library.path, using "ps -ef | grep hbase".
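
For example (illustrative only; any grep over the process list will do):

ps -ef | grep hbase | tr ' ' '\n' | grep '^-Djava.library.path'   # prints just that switch, one entry per matching process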


-- 
Asaf Mesika
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Tuesday 10 July 2012 at 20:04, Arvid Warnecke wrote:

> Hello Asaf,
> 
> On Tue, Jul 10, 2012 at 02:20:03PM +0300, Asaf Mesika wrote:
> > On Jul 10, 2012, at 8:57 AM, Arvid Warnecke wrote:
> > > On Mon, Jul 09, 2012 at 09:10:12PM +0300, Asaf Mesika wrote:
> > > > On Jul 9, 2012, at 21:00 PM, Harsh J wrote:
> > > > > The hbase-daemon.sh does not ssh back into the host, so preserves any
> > > > > environment variables you haven't otherwise set in the hbase-env.sh
> > > > > file. I guess that did the trick for you.
> > > > > 
> > > > 
> > > > Maybe you should look at the content of the jvm argument switch
> > > > -Djava.library.path, (ps -ef | grep hbase , to see the command line).
> > > > This will give you a hint on the directories the .so object is being
> > > > looked for.
> > > > 
> > > 
> > > It seems that that switch is only in the 'hbase' script itself. But
> > > something like that must be the difference, because in my shell I only
> > > set $HADOOP_HOME and $HBASE_HOME and $HADOOP_CLASSPATH via ~/.zshrc.
> > > 
> > 
> > It's not only there. Inside the hbase script itself you can see the following bash section, which pulls the value of java.library.path from the hadoop shell script:
> > 
> > #If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
> > HADOOP_IN_PATH=$(PATH="${HADOOP_HOME:-${HADOOP_PREFIX}}/bin:$PATH" which hadoop 2>/dev/null)
> > if [ -f ${HADOOP_IN_PATH} ]; then
> >   HADOOP_JAVA_LIBRARY_PATH=$(HADOOP_CLASSPATH="$CLASSPATH" ${HADOOP_IN_PATH} \
> >                              org.apache.hadoop.hbase.util.GetJavaProperty java.library.path 2>/dev/null)
> >   if [ -n "$HADOOP_JAVA_LIBRARY_PATH" ]; then
> >     JAVA_LIBRARY_PATH=$(append_path "${JAVA_LIBRARY_PATH}" "$HADOOP_JAVA_LIBRARY_PATH")
> >   fi
> >   CLASSPATH=$(append_path "${CLASSPATH}" `${HADOOP_IN_PATH} classpath 2>/dev/null`)
> > fi
> > 
> > if [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then
> >   if [ -z $JAVA_PLATFORM ]; then
> >     JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
> >   fi
> >   if [ -d "$HBASE_HOME/build/native" ]; then
> >     JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib)
> >   fi
> > 
> >   if [ -d "${HBASE_HOME}/lib/native" ]; then
> >     JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/lib/native/${JAVA_PLATFORM})
> >   fi
> > fi
> > 
> 
> Thank you. So the start-hbase.sh file should take care of such things,
> too. Or it might be easier to write a wrapper script to call
> hbase-daemon.sh for master and regionserver in a row.
> 
> Cheers,
> Arvid
> 
> -- 
> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
> ---[ ThreePiO was right: Let the Wookiee win. ]---
> 
> 
> 



Re: Enable Snappy compression - not able to load the libs on startup

Posted by Arvid Warnecke <ar...@nostalgix.org>.
Hello Asaf,

On Tue, Jul 10, 2012 at 02:20:03PM +0300, Asaf Mesika wrote:
> On Jul 10, 2012, at 8:57 AM, Arvid Warnecke wrote:
> > On Mon, Jul 09, 2012 at 09:10:12PM +0300, Asaf Mesika wrote:
> >> On Jul 9, 2012, at 21:00 PM, Harsh J wrote:
> >>> The hbase-daemon.sh does not ssh back into the host, so preserves any
> >>> environment variables you haven't otherwise set in the hbase-env.sh
> >>> file. I guess that did the trick for you.
> >>> 
> >> Maybe you should look at the content of the jvm argument switch
> >> -Djava.library.path, (ps -ef | grep hbase , to see the command line).
> >> This will give you a hint on the directories the .so object is being
> >> looked for.
> >> 
> > It seems that that switch is only in the 'hbase' script itself. But
> > something like that must be the difference, because in my shell I only
> > set $HADOOP_HOME and $HBASE_HOME and $HADOOP_CLASSPATH via ~/.zshrc.
> It's not only there. Inside the hbase script itself you can see the following bash section, which pulls the value of java.library.path from the hadoop shell script:
> 
> #If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
> HADOOP_IN_PATH=$(PATH="${HADOOP_HOME:-${HADOOP_PREFIX}}/bin:$PATH" which hadoop 2>/dev/null)
> if [ -f ${HADOOP_IN_PATH} ]; then
>   HADOOP_JAVA_LIBRARY_PATH=$(HADOOP_CLASSPATH="$CLASSPATH" ${HADOOP_IN_PATH} \
>                              org.apache.hadoop.hbase.util.GetJavaProperty java.library.path 2>/dev/null)
>   if [ -n "$HADOOP_JAVA_LIBRARY_PATH" ]; then
>     JAVA_LIBRARY_PATH=$(append_path "${JAVA_LIBRARY_PATH}" "$HADOOP_JAVA_LIBRARY_PATH")
>   fi
>   CLASSPATH=$(append_path "${CLASSPATH}" `${HADOOP_IN_PATH} classpath 2>/dev/null`)
> fi
> 
> if [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then
>   if [ -z $JAVA_PLATFORM ]; then
>     JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
>   fi
>   if [ -d "$HBASE_HOME/build/native" ]; then
>     JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib)
>   fi
> 
>   if [ -d "${HBASE_HOME}/lib/native" ]; then
>     JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/lib/native/${JAVA_PLATFORM})
>   fi
> fi
> 
Thank you. So the start-hbase.sh file should take care of such things,
too. Or it might be easier to write a wrapper script that calls
hbase-daemon.sh for master and regionserver one after the other, as sketched below.
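
Something along these lines, perhaps (just a sketch; it assumes HBASE_HOME is exported in the environment):

#!/bin/sh
# minimal wrapper: start master and regionserver one after the other via hbase-daemon.sh
"$HBASE_HOME/bin/hbase-daemon.sh" start master
"$HBASE_HOME/bin/hbase-daemon.sh" start regionserver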

Cheers,
Arvid

-- 
[ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
[ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
---[  ThreePiO was right: Let the Wookiee win.  ]---

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Asaf Mesika <as...@gmail.com>.
On Jul 10, 2012, at 8:57 AM, Arvid Warnecke wrote:

> Hello,
> 
> On Mon, Jul 09, 2012 at 09:10:12PM +0300, Asaf Mesika wrote:
>> On Jul 9, 2012, at 21:00 PM, Harsh J wrote:
>>> The hbase-daemon.sh does not ssh back into the host, so preserves any
>>> environment variables you haven't otherwise set in the hbase-env.sh
>>> file. I guess that did the trick for you.
>>> 
>> Maybe you should look at the content of the jvm argument switch
>> -Djava.library.path, (ps -ef | grep hbase , to see the command line).
>> This will give you a hint on the directories the .so object is being
>> looked for.
>> 
> It seems that that switch is only in the 'hbase' script itself. But
> something like that must be the difference, because in my shell I only
> set $HADOOP_HOME and $HBASE_HOME and $HADOOP_CLASSPATH via ~/.zshrc.
It's not only there. Inside the hbase script itself you can see the following bash section, which pulls the value of java.library.path from the hadoop shell script:

#If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
HADOOP_IN_PATH=$(PATH="${HADOOP_HOME:-${HADOOP_PREFIX}}/bin:$PATH" which hadoop 2>/dev/null)
if [ -f ${HADOOP_IN_PATH} ]; then
  HADOOP_JAVA_LIBRARY_PATH=$(HADOOP_CLASSPATH="$CLASSPATH" ${HADOOP_IN_PATH} \
                             org.apache.hadoop.hbase.util.GetJavaProperty java.library.path 2>/dev/null)
  if [ -n "$HADOOP_JAVA_LIBRARY_PATH" ]; then
    JAVA_LIBRARY_PATH=$(append_path "${JAVA_LIBRARY_PATH}" "$HADOOP_JAVA_LIBRARY_PATH")
  fi
  CLASSPATH=$(append_path "${CLASSPATH}" `${HADOOP_IN_PATH} classpath 2>/dev/null`)
fi

if [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then
  if [ -z $JAVA_PLATFORM ]; then
    JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
  fi
  if [ -d "$HBASE_HOME/build/native" ]; then
    JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib)
  fi

  if [ -d "${HBASE_HOME}/lib/native" ]; then
    JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" ${HBASE_HOME}/lib/native/${JAVA_PLATFORM})
  fi
fi
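
The same lookup the script performs can also be tried by hand; roughly (assuming both the hadoop and hbase commands are on the PATH, with `hbase classpath` supplying the jar that contains GetJavaProperty):

HADOOP_CLASSPATH=$(hbase classpath) hadoop \
    org.apache.hadoop.hbase.util.GetJavaProperty java.library.path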


> 
> Cheers,
> Arvid
> 
> PS: Too bad that all those handy scripts have been removed in CDH4 completely,
> btw. Now you have to send the output to a log file by yourself.
> 
> 
> -- 
> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
> ---[  ThreePiO was right: Let the Wookiee win.  ]---


Re: Enable Snappy compression - not able to load the libs on startup

Posted by Arvid Warnecke <ar...@nostalgix.org>.
Hello,

On Mon, Jul 09, 2012 at 09:10:12PM +0300, Asaf Mesika wrote:
> On Jul 9, 2012, at 21:00 PM, Harsh J wrote:
> > The hbase-daemon.sh does not ssh back into the host, so preserves any
> > environment variables you haven't otherwise set in the hbase-env.sh
> > file. I guess that did the trick for you.
> > 
> Maybe you should look at the content of the jvm argument switch
> -Djava.library.path, (ps -ef | grep hbase , to see the command line).
> This will give you a hint on the directories the .so object is being
> looked for.
> 
It seems that that switch is only in the 'hbase' script itself. But
something like that must be the difference, because in my shell I only
set $HADOOP_HOME and $HBASE_HOME and $HADOOP_CLASSPATH via ~/.zshrc.

Cheers,
Arvid

PS: Too bad that all those handy scripts have been removed in CDH4 completely,
btw. Now you have to send the output to a log file by yourself.


-- 
[ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
[ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
---[  ThreePiO was right: Let the Wookiee win.  ]---

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Asaf Mesika <as...@gmail.com>.
Maybe you should look at the content of the JVM argument switch -Djava.library.path (use "ps -ef | grep hbase" to see the command line). This will give you a hint about the directories in which the .so object is being looked for.

On Jul 9, 2012, at 21:00 PM, Harsh J wrote:

> The hbase-daemon.sh does not ssh back into the host, so preserves any
> environment variables you haven't otherwise set in the hbase-env.sh
> file. I guess that did the trick for you.
> 
> On Mon, Jul 9, 2012 at 11:14 PM, Arvid Warnecke <ar...@nostalgix.org> wrote:
>> Hello Harsh,
>> 
>> On Mon, Jul 09, 2012 at 07:14:56AM +0530, Harsh J wrote:
>>> Perhaps the pre-compiled set does not work against the version of libs
>>> in your ArchLinux. We've noticed this to be the case between CentOS 5
>>> and 6 versions too (5 doesn't pick up the Snappy codec for some
>>> reason).
>>> 
>>> Try recompiling them on the hadoop side (ant compile-native, etc.).
>>> For a loose dependency set to compile the natives, see
>>> http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk.
>>> Alternatively, you can also run the CDH build script under
>>> $HADOOP_HOME/cloudera/do-release-build to get it going automatically
>>> and producing a new tarball.
>>> 
>> Thank you for those suggestions. I tried to rebuild with ant, which
>> broke with some useless error message and I tried to use the
>> do-release-build script which threw some other errors. At last I tried
>> to build only the snappy sources which had been downloaded via the
>> do-release-build script....
>> 
>> What really did the trick was not to use ./bin/start-hbase.sh to start
>> up the Master and Regionserver, but to use ./bin/hbase-daemon.sh start
>> master and ./bin/hbase-daemon.sh start regionserver.
>> 
>> I did not find the main difference between those scripts yet to tell
>> what is missing in the start-hbase.sh.
>> 
>> Cheers,
>> Arvid
>> 
>> --
>> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
>> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
>> ---[  ThreePiO was right: Let the Wookiee win.  ]---
> 
> 
> 
> -- 
> Harsh J


Re: Enable Snappy compression - not able to load the libs on startup

Posted by Harsh J <ha...@cloudera.com>.
The hbase-daemon.sh script does not ssh back into the host, so it preserves any
environment variables you haven't otherwise set in the hbase-env.sh
file. I guess that did the trick for you.
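
If start-hbase.sh is preferred, the usual workaround is to export the variables in conf/hbase-env.sh so they no longer depend on the calling shell; a sketch, with the paths taken from earlier in this thread (adjust to your layout):

# conf/hbase-env.sh
export HADOOP_HOME=/home/madhatter/CDH3/hadoop
export HBASE_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64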

On Mon, Jul 9, 2012 at 11:14 PM, Arvid Warnecke <ar...@nostalgix.org> wrote:
> Hello Harsh,
>
> On Mon, Jul 09, 2012 at 07:14:56AM +0530, Harsh J wrote:
>> Perhaps the pre-compiled set does not work against the version of libs
>> in your ArchLinux. We've noticed this to be the case between CentOS 5
>> and 6 versions too (5 doesn't pick up the Snappy codec for some
>> reason).
>>
>> Try recompiling them on the hadoop side (ant compile-native, etc.).
>> For a loose dependency set to compile the natives, see
>> http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk.
>> Alternatively, you can also run the CDH build script under
>> $HADOOP_HOME/cloudera/do-release-build to get it going automatically
>> and producing a new tarball.
>>
> Thank you for those suggestions. I tried to rebuild with ant, which
> broke with some useless error message and I tried to use the
> do-release-build script which threw some other errors. At last I tried
> to build only the snappy sources which had been downloaded via the
> do-release-build script....
>
> What really did the trick was not to use ./bin/start-hbase.sh to start
> up the Master and Regionserver, but to use ./bin/hbase-daemon.sh start
> master and ./bin/hbase-daemon.sh start regionserver.
>
> I did not find the main difference between those scripts yet to tell
> what is missing in the start-hbase.sh.
>
> Cheers,
> Arvid
>
> --
> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
> ---[  ThreePiO was right: Let the Wookiee win.  ]---



-- 
Harsh J

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Arvid Warnecke <ar...@nostalgix.org>.
Hello Harsh,

On Mon, Jul 09, 2012 at 07:14:56AM +0530, Harsh J wrote:
> Perhaps the pre-compiled set does not work against the version of libs
> in your ArchLinux. We've noticed this to be the case between CentOS 5
> and 6 versions too (5 doesn't pick up the Snappy codec for some
> reason).
> 
> Try recompiling them on the hadoop side (ant compile-native, etc.).
> For a loose dependency set to compile the natives, see
> http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk.
> Alternatively, you can also run the CDH build script under
> $HADOOP_HOME/cloudera/do-release-build to get it going automatically
> and producing a new tarball.
> 
Thank you for those suggestions. I tried to rebuild with ant, which
broke with an unhelpful error message, and I tried to use the
do-release-build script, which threw some other errors. At last I tried
to build only the snappy sources which had been downloaded via the
do-release-build script...

What really did the trick was not to use ./bin/start-hbase.sh to start
up the Master and Regionserver, but to use ./bin/hbase-daemon.sh start
master and ./bin/hbase-daemon.sh start regionserver.

I have not yet found the main difference between those scripts, so I
cannot tell what is missing in start-hbase.sh.

Cheers,
Arvid

-- 
[ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
[ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
---[  ThreePiO was right: Let the Wookiee win.  ]---

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Harsh J <ha...@cloudera.com>.
Perhaps the pre-compiled set does not work against the version of libs
in your ArchLinux. We've noticed this to be the case between CentOS 5
and 6 versions too (5 doesn't pick up the Snappy codec for some
reason).

Try recompiling them on the hadoop side (ant compile-native, etc.).
For a loose dependency set to compile the natives, see
http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk.
Alternatively, you can also run the CDH build script under
$HADOOP_HOME/cloudera/do-release-build to get it going automatically
and producing a new tarball.
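
Roughly, for the CDH3-era tarball layout described here (targets and scripts as named above; details differ between Hadoop/CDH versions):

cd $HADOOP_HOME
ant compile-native              # rebuild the native libs against the local toolchain
# or let the bundled CDH script do a full build and produce a new tarball:
sh cloudera/do-release-build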

On Mon, Jul 9, 2012 at 1:51 AM, Arvid Warnecke <ar...@nostalgix.org> wrote:
> Hello Paul,
>
> On Sun, Jul 08, 2012 at 06:45:46PM +0200, Paul Cavallaro wrote:
>> On Sun, Jul 8, 2012 at 9:48 AM, Arvid Warnecke <ar...@nostalgix.org> wrote:
>> > I already found some old entries from mailinglists and articles at
>> > Cloudera how to use the Snappy library from Hadoop in HBase, but it does
>> > not seem to work for me.
>> >
>> > I installed Hadoop and HBase from the tarballs, because there are no
>> > packages available for Arch Linux. Everything worked fine, but I am not
>> > able to use any compression for my tables.
>> >
>> > When I use
>> >
>> > hbase> create 'table', {NAME=>'fam', COMPRESSION=>'snappy'}
>> >
>> I would first ask if you've installed the native snappy libraries on the
>> machine?
>>
>> http://hbase.apache.org/book/snappy.compression.html
>>
>> That seems to be the likely culprit here.
>>
> No, I did not. I installed Hadoop via Cloudera tarball. There are libs
> for different compressions available at $HADOOP_HOME/lib/native.
> Is there a difference?
>
> Cheers,
> Arvid
>
> --
> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
> ---[  ThreePiO was right: Let the Wookiee win.  ]---



-- 
Harsh J

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Arvid Warnecke <ar...@nostalgix.org>.
Hello Paul,

On Sun, Jul 08, 2012 at 06:45:46PM +0200, Paul Cavallaro wrote:
> On Sun, Jul 8, 2012 at 9:48 AM, Arvid Warnecke <ar...@nostalgix.org> wrote:
> > I already found some old entries from mailinglists and articles at
> > Cloudera how to use the Snappy library from Hadoop in HBase, but it does
> > not seem to work for me.
> >
> > I installed Hadoop and HBase from the tarballs, because there are no
> > packages available for Arch Linux. Everything worked fine, but I am not
> > able to use any compression for my tables.
> >
> > When I use
> >
> > hbase> create 'table', {NAME=>'fam', COMPRESSION=>'snappy'}
> >
> I would first ask if you've installed the native snappy libraries on the
> machine?
> 
> http://hbase.apache.org/book/snappy.compression.html
> 
> That seems to be the likely culprit here.
> 
No, I did not. I installed Hadoop via the Cloudera tarball. There are libs
for different compression codecs available at $HADOOP_HOME/lib/native.
Is there a difference?

Cheers,
Arvid

-- 
[ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
[ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
---[  ThreePiO was right: Let the Wookiee win.  ]---

Re: Enable Snappy compression - not able to load the libs on startup

Posted by Paul Cavallaro <pa...@gmail.com>.
I would first ask if you've installed the native snappy libraries on the
machine?

http://hbase.apache.org/book/snappy.compression.html

That seems to be the likely culprit here.
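
A couple of quick checks, as a sketch (paths are assumptions based on the tarball layout mentioned in this thread):

ls "$HADOOP_HOME"/lib/native/Linux-amd64-64/ | grep -i -e snappy -e hadoop   # native codec libs shipped with Hadoop
ldconfig -p | grep -i snappy                                                 # a system-wide libsnappy, if any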

Thanks,

-Paul

On Sun, Jul 8, 2012 at 9:48 AM, Arvid Warnecke <ar...@nostalgix.org> wrote:

> Hello,
>
> I already found some old entries from mailinglists and articles at
> Cloudera how to use the Snappy library from Hadoop in HBase, but it does
> not seem to work for me.
>
> I installed Hadoop and HBase from the tarballs, because there are no
> packages available for Arch Linux. Everything worked fine, but I am not
> able to use any compression for my tables.
>
> When I use
>
> hbase> create 'table', {NAME=>'fam', COMPRESSION=>'snappy'}
>
> I see in the logs from the regionserver lots of the same error messages:
> 2012-07-07 17:00:17,646 ERROR
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed
> open of region=rawdb,,1341672997475.31ecf39289eb5034fb6a3c9f1a0cad2b.
> java.io.IOException: Compression algorithm 'snappy' previously failed
> test.
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:78)
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:2797)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2786)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2774)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:319)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:105)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:163)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
>
> I already tried to use the following in the hbase-env.sh file:
>
> export HBASE_LIBRARY_PATH=/home/madhatter/CDH3/hadoop/lib/native/Linux-amd64-64
>
> That is where my Cloudera Hadoop & HBase are located, but it seems that
> it does not do the trick. Do I need to set other variables as well?
> CLASSPATHes or anything like that? Compression seems to be the only
> thing which is not working. When I installed HBase as Cloudera Packages
> in Debian I never had such issues.
>
> Best regards,
> Arvid
>
> --
> [ Arvid Warnecke ][ arvid (at) nostalgix (dot) org ]
> [ IRC/OPN: "madhatter" ][ http://www.nostalgix.org ]
> ---[  ThreePiO was right: Let the Wookiee win.  ]---
>