Posted to user@hbase.apache.org by Stack <st...@duboce.net> on 2011/03/16 18:35:09 UTC

Re: after upgrade, fatal error in regionserver compacter, LzoCompressor, "AbstractMethodError"

Poking around in our mail archives, does this help?  For example:
http://search-hadoop.com/m/QMDV41Sh1GI/lzo+compression&subj=LZO+Compression
St.Ack

On Wed, Mar 16, 2011 at 10:28 AM, Ferdy Galema <fe...@kalooga.com> wrote:
> We upgraded to Hadoop 0.20.1 and HBase 0.90.1 (both CDH3B4). We are using
> 64-bit machines.
>
> Startup goes fine, but right after the first compaction we get this
> error:
> Uncaught exception in service thread regionserver60020.compactor
> java.lang.AbstractMethodError: com.hadoop.compression.lzo.LzoCompressor.reinit(Lorg/apache/hadoop/conf/Configuration;)V
>        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:105)
>        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
>        at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:200)
>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.getCompressingStream(HFile.java:397)
>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.newBlock(HFile.java:383)
>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkBlockBoundary(HFile.java:354)
>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:536)
>        at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:501)
>        at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:836)
>        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:935)
>        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:733)
>        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:769)
>        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:714)
>        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:81)
>
> LZO worked fine before the upgrade. This is how I believe we had it set up:
> # LZO compression in HBase passes through three layers:
> # 1) hadoop-gpl-compression-*.jar in the hbase/lib directory; the entry point
> # 2) libgplcompression.* in the hbase native lib directory; the native connectors
> # 3) liblzo2.so.2 in the hbase native lib directory; the base native library
> Anyway, it would be great if somebody could help us out.
>
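
An AbstractMethodError at a call site like CodecPool.getCompressor() usually means the JVM linked the call to Compressor.reinit(Configuration), which the upgraded Hadoop declares on the Compressor interface, against an LzoCompressor class file compiled before that method existed. A minimal diagnostic sketch (the class name LzoJarCheck and the printed messages are made up for illustration; the reflective calls are standard JDK, and it assumes it is run with the same classpath HBase uses):

    import org.apache.hadoop.conf.Configuration;

    public class LzoJarCheck {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("com.hadoop.compression.lzo.LzoCompressor");
            // Which jar was the class actually loaded from?
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
            // A jar that predates the reinit() API will not declare the method.
            try {
                c.getDeclaredMethod("reinit", Configuration.class);
                System.out.println("reinit(Configuration) is declared; jar looks current");
            } catch (NoSuchMethodException e) {
                System.out.println("reinit(Configuration) is missing; stale hadoop-gpl-compression jar");
            }
        }
    }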

Re: after upgrade, fatal error in regionserver compacter, LzoCompressor, "AbstractMethodError"

Posted by Jiajun Chen <cj...@gmail.com>.
Use this version: http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1

Add this method to LzoDecompressor:

@Override
public int getRemaining() {
    // Required by the Decompressor interface in newer Hadoop releases.
    return uncompressedDirectBuf.remaining();
}

Change the return type of getCompressedData() in LzopInputStream (not LzopCodec; see the compile errors in the next message) from void to int:

@Override
protected int getCompressedData() throws IOException {
    ...
    return len;  // number of compressed bytes obtained
}

After these changes, the ant build succeeds.
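
For context, these are the signature changes in the newer hadoop-core that force the two patches, paraphrased from the compile errors quoted in the next message rather than copied from the Hadoop source:

    // Decompressor gained an abstract method that every implementor,
    // including LzoDecompressor, must now provide:
    int getRemaining();

    // BlockDecompressorStream.getCompressedData() changed its return type,
    // so LzopInputStream's override has to change to match:
    protected void getCompressedData() throws IOException;  // old hadoop-gpl-compression
    protected int getCompressedData() throws IOException;   // required by hadoop-core-1.0.1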

On 10 April 2012 18:10, Jiajun Chen <cj...@gmail.com> wrote:

> Why can't I use this version:
> http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1


-- 

Jiajun Chen (陈加俊), Project Manager
Youxun Times (Beijing) Network Technology Co., Ltd.
Youxun Net: www.uuwatch.com

Address: Room 207, Haohai Building, 7 Shangdi 5th Street, Haidian District, Beijing

Tel: 010-82895510
Fax: 010-82896636
Mobile: 15110038983
Email: cjjvictory@gmail.com

Re: after upgrade, fatal error in regionserver compacter, LzoCompressor, "AbstractMethodError"

Posted by Jiajun Chen <cj...@gmail.com>.
I replaced hadoop-core-0.20.2-cdh3u1.jar with hadoop-core-1.0.1.jar, but the build failed.

compile-java:
    [javac] Compiling 24 source files to /app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/build/classes
    [javac] /app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/src/java/com/hadoop/compression/lzo/LzoDecompressor.java:34: com.hadoop.compression.lzo.LzoDecompressor is not abstract and does not override abstract method getRemaining() in org.apache.hadoop.io.compress.Decompressor
    [javac] class LzoDecompressor implements Decompressor {
    [javac] ^
    [javac] /app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/src/java/com/hadoop/compression/lzo/LzopInputStream.java:277: getCompressedData() in com.hadoop.compression.lzo.LzopInputStream cannot override getCompressedData() in org.apache.hadoop.io.compress.BlockDecompressorStream; attempting to use incompatible return type
    [javac] found   : void
    [javac] required: int
    [javac]   protected void getCompressedData() throws IOException {
    [javac]                  ^
    [javac] /app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/src/java/com/hadoop/compression/lzo/LzopInputStream.java:276: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/src/java/com/hadoop/mapreduce/LzoIndexOutputFormat.java:31: warning: [deprecation] cleanupJob(org.apache.hadoop.mapreduce.JobContext) in org.apache.hadoop.mapreduce.OutputCommitter has been deprecated
    [javac]       @Override public void cleanupJob(JobContext jobContext) throws IOException {}
    [javac]                             ^
    [javac] 3 errors
    [javac] 1 warning

BUILD FAILED
/app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/build.xml:216: Compile failed; see the compiler error output for details.


On 10 April 2012 18:10, Jiajun Chen <cj...@gmail.com> wrote:

> Why can't I use this version:
> http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1

Re: after upgrade, fatal error in regionserver compacter, LzoCompressor, "AbstractMethodError"

Posted by Jiajun Chen <cj...@gmail.com>.
Why can't I use this version:
http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1


On 17 March 2011 17:29, Ferdy Galema <fe...@kalooga.com> wrote:

> Updating the lzo libraries resolved the problem. Thanks for pointing it
> out and thanks to Todd Lipcon for his hadoop-lzo-packager.

Re: after upgrade, fatal error in regionserver compacter, LzoCompressor, "AbstractMethodError"

Posted by Ferdy Galema <fe...@kalooga.com>.
Updating the lzo libraries resolved the problem. Thanks for pointing it 
out and thanks to Todd Lipcon for his hadoop-lzo-packager.
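
A quick way to confirm a rebuilt codec before letting compactions depend on it is a round trip in memory. A minimal sketch (the class name LzoRoundTrip is made up; it assumes the new hadoop-lzo jar is on the classpath and the native libraries are on java.library.path):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class LzoRoundTrip {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
                    conf.getClassByName("com.hadoop.compression.lzo.LzoCodec"), conf);
            byte[] data = "lzo round trip".getBytes("UTF-8");

            // Compress into memory.
            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            OutputStream out = codec.createOutputStream(compressed);
            out.write(data);
            out.close();

            // Decompress and print what comes back.
            InputStream in = codec.createInputStream(
                    new ByteArrayInputStream(compressed.toByteArray()));
            byte[] back = new byte[data.length];
            int n = 0;
            while (n < back.length) {
                int r = in.read(back, n, back.length - n);
                if (r < 0) break;
                n += r;
            }
            System.out.println(new String(back, 0, n, "UTF-8"));
        }
    }

HBase also ships org.apache.hadoop.hbase.util.CompressionTest, which exercises a codec against a real file path and catches classpath and native-library problems the same way a compaction would.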
