Posted to dev@hbase.apache.org by "Abinash Karana (Bizosys)" <ab...@bizosys.com> on 2011/01/07 11:32:44 UTC

java.lang.NoSuchMethodException: hbase-0.90

11/01/07 14:46:11 WARN wal.SequenceFileLogReader: Error while trying to get accurate file length.  Truncation / data loss may occur if RegionServers die.
java.lang.NoSuchMethodException: org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength()
        at java.lang.Class.getMethod(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1848)
        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1808)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:350)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2505)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2491)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:262)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:94)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)


Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Stack <st...@duboce.net>.
On Fri, Jan 7, 2011 at 8:24 AM, Abinash Karana (Bizosys)
<ab...@bizosys.com> wrote:
> One more finding > The 0.90 seems slower than 0.89
>
> Test Result: I indexed using HSearch (HSearch Uses HBase for storing
> indexes) around 1 Million records of Freebase location information. The
> warmed Search for keyword "Hill" returned around 6000 matching records and
> 10 teasers in around 250ms. In the same test bed with 0.90 it went up to
> 280ms on average. May be the ugly session warnings are causing it!!
>
What are the session warnings?

0.90.0 should be faster -- at least that's our impression -- since
0.90.0 has some work that 0.89 doesn't (I know it's hard to believe,
but we have been wrong in the past).

St.Ack

RE: java.lang.NoSuchMethodException: hbase-0.90

Posted by "Abinash Karana (Bizosys)" <ab...@bizosys.com>.
Yes, it's on my laptop only.
I keep testing here as I develop.

-----Original Message-----
From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of Stack
Sent: Tuesday, January 18, 2011 11:41 AM
To: dev@hbase.apache.org; abinash@bizosys.com
Subject: Re: java.lang.NoSuchMethodException: hbase-0.90

On Sun, Jan 16, 2011 at 6:58 PM, Abinash Karana (Bizosys)
<ab...@bizosys.com> wrote:
> Here goes the exception details.. I again encountered...
>

You are running on local filesystem?

> java.io.IOException: Unable to delete src dir:
> file:/tmp/hbase-karan/hbase/.logs/abinash,3620,1295188307109

You have any idea why we can't delete files?

Yeah, it looks like log splitting doesn't work on the local
filesystem.  It complains about a missing method.  Maybe there is a
hole in our reflection code where we figure out whether key
functionality is available in the underlying FS.  Want to file an
issue so one of us can take a look?  Meantime, move to HDFS
(pseudo-distributed mode if you are on one server only)?  You might
get further.

Thanks,
St.Ack


Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Stack <st...@duboce.net>.
On Sun, Jan 16, 2011 at 6:58 PM, Abinash Karana (Bizosys)
<ab...@bizosys.com> wrote:
> Here goes the exception details.. I again encountered...
>

You are running on local filesystem?

> java.io.IOException: Unable to delete src dir:
> file:/tmp/hbase-karan/hbase/.logs/abinash,3620,1295188307109

You have any idea why we can't delete files?

Yeah, it looks like log splitting doesn't work on the local
filesystem.  It complains about a missing method.  Maybe there is a
hole in our reflection code where we figure out whether key
functionality is available in the underlying FS.  Want to file an
issue so one of us can take a look?  Meantime, move to HDFS
(pseudo-distributed mode if you are on one server only)?  You might
get further.

Thanks,
St.Ack
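
Purely as an illustration of the pseudo-distributed suggestion above, an
hbase-site.xml along these lines points HBase at a local HDFS rather than
file:/// (the namenode address and paths are assumptions for this example,
not values taken from the thread):

```xml
<!-- hbase-site.xml sketch: store HBase data in a local pseudo-distributed
     HDFS instead of the local filesystem.  The namenode port (9000) is
     illustrative; match it to your fs.default.name in core-site.xml. -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
```

With hbase.rootdir on HDFS, WAL reading should exercise the DFS client code
paths rather than the local ChecksumFileSystem whose stream lacks the
getFileLength method complained about in the traces.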

RE: java.lang.NoSuchMethodException: hbase-0.90

Posted by "Abinash Karana (Bizosys)" <ab...@bizosys.com>.
Here are the exception details; I ran into it again.

 

11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$mutateRowsTs_result$1.class
11/01/16 20:22:59 INFO wal.HLogSplitter: Split writers finished
11/01/16 20:22:59 ERROR master.MasterFileSystem: Failed splitting file:/tmp/hbase-karan/hbase/.logs/abinash,3620,1295188307109
java.io.IOException: Unable to delete src dir: file:/tmp/hbase-karan/hbase/.logs/abinash,3620,1295188307109
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.archiveLogs(HLogSplitter.java:341)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:290)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:187)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:196)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:180)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:378)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:277)
        at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
        at java.lang.Thread.run(Unknown Source)
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$Iface.class
11/01/16 20:22:59 INFO master.MasterFileSystem: Log folder file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984 doesn't belong to a known region server, splitting
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$deleteAllTs_args.class
11/01/16 20:22:59 INFO wal.HLogSplitter: Splitting 1 hlog(s) in file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$Processor$atomicIncrement.class
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Splitting hlog 1 of 1: file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984/abinash%3A4030.1295189502031, length=8192
11/01/16 20:22:59 WARN util.FSUtils: Running on HDFS without append enabled may result in data loss
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-0,6,main]: starting
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$mutateRow_args$_Fields.class
11/01/16 20:22:59 WARN fs.FSInputChecker: Problem opening checksum file: file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984/abinash%3A4030.1295189502031.  Ignoring exception: java.io.EOFException
        at java.io.DataInputStream.readFully(Unknown Source)
        at java.io.DataInputStream.readFully(Unknown Source)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:134)
        at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
        at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1444)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:65)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:469)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:406)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:261)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:187)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:196)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:180)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:378)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:277)
        at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
        at java.lang.Thread.run(Unknown Source)
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-2,6,main]: starting
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-1,6,main]: starting
11/01/16 20:22:59 WARN wal.SequenceFileLogReader: Error while trying to get accurate file length.  Truncation / data loss may occur if RegionServers die.
java.lang.NoSuchMethodException: org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength()
        at java.lang.Class.getMethod(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:469)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:406)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:261)
        at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:187)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:196)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:180)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:378)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:277)
        at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
        at java.lang.Thread.run(Unknown Source)
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$getColumnDescriptors_result$1.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/generated/Hbase$scannerOpenWithPrefix_result$_Fields.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/ThriftServer$HBaseHandler$1.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/ThriftServer$HBaseHandler.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/ThriftUtilities.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/thrift/ThriftServer.class
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: org/apache/hadoop/hbase/HColumnDescriptor$CompressionType.class
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Pushed=31 entries from file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984/abinash%3A4030.1295189502031
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: META-INF/LICENSE
11/01/16 20:22:59 INFO wal.HLogSplitter: EOF from hlog file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984/abinash%3A4030.1295189502031.  continuing
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: META-INF/NOTICE
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: META-INF/DEPENDENCIES
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: hbase-webapps/master/index.html
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: hbase-webapps/master/WEB-INF/web.xml
11/01/16 20:22:59 INFO wal.SequenceFileLogWriter: syncFs -- HDFS-200 -- not available, dfs.support.append=false
11/01/16 20:22:59 DEBUG wal.HLogSplitter: Creating writer path=file:/tmp/hbase-karan/hbase/-ROOT-/70236052/recovered.edits/0000000000000002697 region=70236052
11/01/16 20:22:59 INFO wal.HLogSplitter: Archived processed log file:/tmp/hbase-karan/hbase/.logs/abinash,4030,1295189497984/abinash%3A4030.1295189502031 to file:/tmp/hbase-karan/hbase/.oldlogs/abinash%3A4030.1295189502031
11/01/16 20:22:59 DEBUG mortbay.log: Skipping entry: hbase-webapps/static/hbase_logo_med.gif

 

-----Original Message-----
From: Ted Dunning [mailto:tdunning@maprtech.com] 
Sent: Saturday, January 08, 2011 2:28 AM
To: dev@hbase.apache.org
Subject: Re: java.lang.NoSuchMethodException: hbase-0.90

 

Great.  I will file a patch to move the check to the constructor and fail
back to old process if the method is missing.

For our case, I just implemented getFileLength and all is happy (on that
front)

On Fri, Jan 7, 2011 at 12:38 PM, Stack <st...@duboce.net> wrote:

> Let me open an issue to add more checks around the reflection; e.g.
> check type as you fellas suggest.  If anything is not as expected,
> then we'd fallback on old getPos behavior.  It should not fail for
> 'pure' HDFS.  'Alternative' HDFS's probably don't have this 2G
> problem.
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Ted Dunning <td...@maprtech.com>.
Great.  I will file a patch to move the check to the constructor and
fall back to the old process if the method is missing.

For our case, I just implemented getFileLength and all is happy (on
that front).

On Fri, Jan 7, 2011 at 12:38 PM, Stack <st...@duboce.net> wrote:

> Let me open an issue to add more checks around the reflection; e.g.
> check type as you fellas suggest.  If anything is not as expected,
> then we'd fallback on old getPos behavior.  It should not fail for
> 'pure' HDFS.  'Alternative' HDFS's probably don't have this 2G
> problem.
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Stack <st...@duboce.net>.
On Fri, Jan 7, 2011 at 11:34 AM, Ted Dunning <td...@maprtech.com> wrote:
> It also assumes the type of this.in.in (at least the duck type).
>

True.

> This broke when we gave it a non HDFS instream.

You manifest as HDFS or as something else?

> I have added a
> getFileLength method so that works now.  But it sounds like it may break
> under other conditions as well.
>

It's breaking for Abinash for some reason, yes.


> Is there a reasonable fall back if the method doesn't exist?  Perhaps it
> would be good to just test if the getMethod call returns null or throws.
>  Then the fallback could be used.
>

The code was added to fix a problem with DFSClient not being able to
deal with files > 2G.

Let me open an issue to add more checks around the reflection, e.g.
check the type as you fellas suggest.  If anything is not as expected,
we'd fall back on the old getPos behavior.  It should not fail for
'pure' HDFS, and 'alternative' HDFSs probably don't have this 2G
problem.

St.Ack
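
The guarded approach being discussed -- resolve the method once in the
constructor and fall back when it is missing -- might look roughly like
the sketch below.  The class and field names are illustrative only, not
the actual HBase patch:

```java
import java.io.IOException;
import java.lang.reflect.Method;

// Sketch: look up getFileLength() via reflection exactly once, at
// construction time, and fall back to a plain position counter when
// the underlying stream does not provide it.
class GuardedPositionReader {
    private final Object inner;          // the underlying input stream
    private final Method getFileLength;  // null when reflection fails
    private long fallbackPos = 0;        // stands in for old getPos state

    GuardedPositionReader(Object inner) {
        this.inner = inner;
        Method m = null;
        try {
            m = inner.getClass().getMethod("getFileLength");
            m.setAccessible(true);
        } catch (NoSuchMethodException e) {
            // Method absent on this filesystem's stream: use fallback.
        }
        this.getFileLength = m;
    }

    long getPos() throws IOException {
        if (getFileLength == null) {
            return fallbackPos;  // old getPos behavior
        }
        try {
            return (Long) getFileLength.invoke(inner);
        } catch (Exception e) {
            throw new IOException("getFileLength reflection failed", e);
        }
    }
}
```

The point of resolving the Method up front is that the presence check
happens once per reader, not on every getPos call, and a missing method
degrades to the old behavior instead of logging an exception per read.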

>
>>
>>
>> > My own situation is a bit unusual since I was testing hbase on a non-HDFS
>> > file system, but Abinash's experience makes it seem that there is
>> something
>> > worse going on.
>> >
>>
>> Ted, you need something changed?  If so, lets do it now before I roll
>> next 0.90.0RC.
>>
>
> I am happy to make the change.  Just need a bit of context to avoid messing
> up.
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Ted Dunning <td...@maprtech.com>.
On Fri, Jan 7, 2011 at 10:30 AM, Stack <st...@duboce.net> wrote:

> As to your question Ted, it does seem like we could do the reflection
> once-only in the constructor rather than every time we do a getPos.
> Let me ask Nicolas.  Maybe he had reason for having to do it each
> time.  As to its assumptions, what you think?  It assumes class has a
> data member named 'in' and that file length is a long which seems safe
> enough.  Otherwise, its just changing accessibility.
>

It also assumes the type of this.in.in (at least the duck type).

This broke when we gave it a non HDFS instream.  I have added a
getFileLength method so that works now.  But it sounds like it may break
under other conditions as well.

Is there a reasonable fallback if the method doesn't exist?  Perhaps
it would be good to just test whether the getMethod call returns null
or throws; then the fallback could be used.


>
>
> > My own situation is a bit unusual since I was testing hbase on a non-HDFS
> > file system, but Abinash's experience makes it seem that there is
> something
> > worse going on.
> >
>
> Ted, you need something changed?  If so, lets do it now before I roll
> next 0.90.0RC.
>

I am happy to make the change.  Just need a bit of context to avoid messing
up.

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Ted Dunning <td...@maprtech.com>.
I think a simple check for the presence of the method is better.

On Fri, Jan 7, 2011 at 11:32 AM, M. C. Srivas <mc...@gmail.com> wrote:

>
> How about checking to see if "in" is instanceOf  DFSInputStream before
> doing
> the rest of the stuff?
>
>
>
> >
> > St.Ack
> >
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by "M. C. Srivas" <mc...@gmail.com>.
On Fri, Jan 7, 2011 at 10:30 AM, Stack <st...@duboce.net> wrote:

> On Fri, Jan 7, 2011 at 7:53 AM, Ted Dunning <td...@maprtech.com> wrote:
> > This is on 0.90, right?  Were you using HDFS to store your region tables?
> >
> > I just ran into the same thing and looked into the
> >
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos
> > method.
> >
> > That method does some truly hideous reflection things without checking
> that
> > the objects involved actually are the correct type.  It also pierces the
> > visibility constraints on fields internal to objects by manipulating
> their
> > visibility.
> >
> > Is that code really necessary?  Is there a good way to make it less
> > sensitive to violation of its assumptions?
> >
>
> Yeah, its ugly, but acrobatics were required to get around dumb
> dfsclient limitation (See hbase-3038).
>
> As to your question Ted, it does seem like we could do the reflection
> once-only in the constructor rather than every time we do a getPos.
> Let me ask Nicolas.  Maybe he had reason for having to do it each
> time.  As to its assumptions, what you think?  It assumes class has a
> data member named 'in' and that file length is a long which seems safe
> enough.  Otherwise, its just changing accessibility.
>
>
> > My own situation is a bit unusual since I was testing hbase on a non-HDFS
> > file system, but Abinash's experience makes it seem that there is
> something
> > worse going on.
> >
>
> Ted, you need something changed?  If so, lets do it now before I roll
> next 0.90.0RC.
>

How about checking to see if "in" is instanceOf  DFSInputStream before doing
the rest of the stuff?



>
> St.Ack
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Stack <st...@duboce.net>.
On Fri, Jan 7, 2011 at 7:53 AM, Ted Dunning <td...@maprtech.com> wrote:
> This is on 0.90, right?  Were you using HDFS to store your region tables?
>
> I just ran into the same thing and looked into the
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos
> method.
>
> That method does some truly hideous reflection things without checking that
> the objects involved actually are the correct type.  It also pierces the
> visibility constraints on fields internal to objects by manipulating their
> visibility.
>
> Is that code really necessary?  Is there a good way to make it less
> sensitive to violation of its assumptions?
>

Yeah, it's ugly, but the acrobatics were required to get around a dumb
DFSClient limitation (see HBASE-3038).

As to your question, Ted, it does seem like we could do the reflection
once-only in the constructor rather than every time we do a getPos.
Let me ask Nicolas; maybe he had a reason for having to do it each
time.  As to its assumptions, what do you think?  It assumes the class
has a data member named 'in' and that the file length is a long, which
seems safe enough.  Otherwise, it's just changing accessibility.
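
For readers following along, the kind of reflection being described --
piercing a wrapper's private 'in' field and invoking getFileLength() on
whatever stream it finds there -- can be sketched as below.  The names
are taken from the discussion and are illustrative; this is not the
actual SequenceFileLogReader code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Sketch of the reflection "acrobatics": reach through a wrapper's
// private 'in' field, make it accessible, then invoke getFileLength()
// on the stream found there.  Throws NoSuchMethodException (as in the
// reported warning) when the stream lacks that method.
class ReflectivePeek {
    static long fileLength(Object wrapper) throws Exception {
        Field inField = wrapper.getClass().getDeclaredField("in");
        inField.setAccessible(true);           // pierce visibility
        Object stream = inField.get(wrapper);  // assumes a duck type...
        Method m = stream.getClass().getMethod("getFileLength");
        m.setAccessible(true);
        return (Long) m.invoke(stream);        // ...returning a long
    }
}
```

This makes the fragility visible: nothing checks that 'in' exists, that
the contained stream is a DFS input stream, or that getFileLength is
present, which is exactly what breaks on ChecksumFileSystem.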


> My own situation is a bit unusual since I was testing hbase on a non-HDFS
> file system, but Abinash's experience makes it seem that there is something
> worse going on.
>

Ted, you need something changed?  If so, lets do it now before I roll
next 0.90.0RC.

St.Ack

RE: java.lang.NoSuchMethodException: hbase-0.90

Posted by "Abinash Karana (Bizosys)" <ab...@bizosys.com>.
Yep, I only have filters, and they are deployed along with HBase 0.90.
You can browse the CVS code @ http://bizosyshsearch.sourceforge.net/

One more finding: 0.90 seems slower than 0.89.

Test result: Using HSearch (HSearch uses HBase for storing indexes), I
indexed around 1 million records of Freebase location information.  A
warmed search for the keyword "Hill" returned around 6000 matching
records and 10 teasers in around 250ms.  On the same test bed with 0.90
it went up to 280ms on average.  Maybe the ugly session warnings are
causing it!

However, with 0.90's Get batching for teasers it came down to 235ms.

Regards
Abinash

-----Original Message-----
From: Ted Dunning [mailto:tdunning@maprtech.com] 
Sent: Friday, January 07, 2011 9:23 PM
To: dev@hbase.apache.org; abinash@bizosys.com
Subject: Re: java.lang.NoSuchMethodException: hbase-0.90

This is on 0.90, right?  Were you using HDFS to store your region tables?

I just ran into the same thing and looked into the
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WAL
ReaderFSDataInputStream.getPos
method.

That method does some truly hideous reflection things without checking that
the objects involved actually are the correct type.  It also pierces the
visibility constraints on fields internal to objects by manipulating their
visibility.

Is that code really necessary?  Is there a good way to make it less
sensitive to violation of its assumptions?

My own situation is a bit unusual since I was testing hbase on a non-HDFS
file system, but Abinash's experience makes it seem that there is something
worse going on.

On Fri, Jan 7, 2011 at 2:32 AM, Abinash Karana (Bizosys) <
abinash@bizosys.com> wrote:

> 11/01/07 14:46:11 WARN wal.SequenceFileLogReader: Error while trying to
get
> accurate file length.  Truncation / data loss may occur if RegionServers d
> ie.
> java.lang.NoSuchMethodException:
>
>
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength
> ()
>        at java.lang.Class.getMethod(Unknown Source)
>        at
>
>
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WAL
> ReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at
>
>
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<in
> it>(SequenceFileLogReader.java:57)
>        at
>
>
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(Sequence
> FileLogReader.java:158)
>        at
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
>        at
>
>
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.ja
> va:1848)
>        at
>
>
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegi
> on.java:1808)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:350)
>        at
>
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2505)
>        at
>
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2491)
>        at
>
>
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
> enRegionHandler.java:262)
>        at
>
>
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
> egionHandler.java:94)
>        at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
> Source)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
>        at java.lang.Thread.run(Unknown Source)
>
>


Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Ted Dunning <td...@maprtech.com>.
This is on 0.90, right?  Were you using HDFS to store your region tables?

I just ran into the same thing and looked into the
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos
method.

That method does some truly hideous reflection things without checking that
the objects involved actually are the correct type.  It also pierces the
visibility constraints on fields internal to objects by manipulating their
visibility.

Is that code really necessary?  Is there a good way to make it less
sensitive to violation of its assumptions?

My own situation is a bit unusual since I was testing hbase on a non-HDFS
file system, but Abinash's experience makes it seem that there is something
worse going on.

On Fri, Jan 7, 2011 at 2:32 AM, Abinash Karana (Bizosys) <
abinash@bizosys.com> wrote:

> 11/01/07 14:46:11 WARN wal.SequenceFileLogReader: Error while trying to get
> accurate file length.  Truncation / data loss may occur if RegionServers d
> ie.
> java.lang.NoSuchMethodException:
>
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength
> ()
>        at java.lang.Class.getMethod(Unknown Source)
>        at
>
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WAL
> ReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at
>
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<in
> it>(SequenceFileLogReader.java:57)
>        at
>
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(Sequence
> FileLogReader.java:158)
>        at
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
>        at
>
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.ja
> va:1848)
>        at
>
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegi
> on.java:1808)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:350)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2505)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2491)
>        at
>
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
> enRegionHandler.java:262)
>        at
>
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
> egionHandler.java:94)
>        at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
> Source)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
>        at java.lang.Thread.run(Unknown Source)
>
>

Re: java.lang.NoSuchMethodException: hbase-0.90

Posted by Stack <st...@duboce.net>.
Abinash, can we have more of the stack trace?  How/Where did this
happen?  As part of normal running?
St.Ack

On Fri, Jan 7, 2011 at 2:32 AM, Abinash Karana (Bizosys)
<ab...@bizosys.com> wrote:
> 11/01/07 14:46:11 WARN wal.SequenceFileLogReader: Error while trying to get
> accurate file length.  Truncation / data loss may occur if RegionServers d
> ie.
> java.lang.NoSuchMethodException:
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength
> ()
>        at java.lang.Class.getMethod(Unknown Source)
>        at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WAL
> ReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<in
> it>(SequenceFileLogReader.java:57)
>        at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(Sequence
> FileLogReader.java:158)
>        at
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.ja
> va:1848)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegi
> on.java:1808)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:350)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2505)
>        at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2491)
>        at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
> enRegionHandler.java:262)
>        at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
> egionHandler.java:94)
>        at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
> Source)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
>        at java.lang.Thread.run(Unknown Source)
>
>