Posted to dev@hbase.apache.org by "ramkrishna.s.vasudevan (JIRA)" <ji...@apache.org> on 2014/01/29 11:06:09 UTC

[jira] [Created] (HBASE-10438) NPE from LRUDictionary when size reaches the max init value

ramkrishna.s.vasudevan created HBASE-10438:
----------------------------------------------

             Summary: NPE from LRUDictionary when size reaches the max init value
                 Key: HBASE-10438
                 URL: https://issues.apache.org/jira/browse/HBASE-10438
             Project: HBase
          Issue Type: Bug
    Affects Versions: 0.98.0
            Reporter: ramkrishna.s.vasudevan
            Assignee: ramkrishna.s.vasudevan
            Priority: Critical
             Fix For: 0.98.0


This happened while testing tags with COMPRESS_TAGS=true/false. I was toggling tag compression by altering the HCD.  The DBE used is FAST_DIFF. 
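For reference, the toggle was along these lines through the client API (a sketch from memory against 0.98 class names, not my exact test code; table and family names are the ones from the stack trace below, imports omitted):
{code}
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor htd = admin.getTableDescriptor(TableName.valueOf("usertable"));
HColumnDescriptor hcd = htd.getFamily(Bytes.toBytes("f1"));
hcd.setCompressTags(true);  // flip to false to turn it back off
admin.modifyColumn(TableName.valueOf("usertable"), hcd);
admin.close();
{code}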
In one particular case I got this:
{code}
2014-01-29 16:20:03,023 ERROR [regionserver60020-smallCompactions-1390983591688] regionserver.CompactSplitThread: Compaction failed Request = regionName=usertable,user5146961419203824653,1390979618897.2dd477d0aed888c615a29356c0bbb19d., storeName=f1, fileCount=4, fileSize=498.6 M (226.0 M, 163.7 M, 67.0 M, 41.8 M), priority=6, time=1994941280334574
java.lang.NullPointerException
        at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.put(LRUDictionary.java:109)
        at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.access$200(LRUDictionary.java:76)
        at org.apache.hadoop.hbase.io.util.LRUDictionary.addEntry(LRUDictionary.java:62)
        at org.apache.hadoop.hbase.io.TagCompressionContext.uncompressTags(TagCompressionContext.java:147)
        at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.decodeTags(BufferedDataBlockEncoder.java:270)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:522)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeFirst(FastDiffDeltaEncoder.java:535)
        at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.setCurrentBuffer(BufferedDataBlockEncoder.java:188)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1017)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1068)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:137)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:509)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:217)
        at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:76)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1074)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1382)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
{code}
I am not able to reproduce this.  One thing to note: I had altered the table to enable COMPRESS_TAGS; before that it was false.  
My feeling is this is not due to COMPRESS_TAGS itself, because we handle that per file by recording the setting in FILE_INFO. 
In the above stack trace the problem occurred during compaction, so the flushed file should have had this property set.  I think the problem could be with LRUDictionary.
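One way I can imagine the dictionary getting into a bad state: nodeToIndex is a HashMap whose Node keys hash on their byte contents (that is what the "we need to rehash this" comment in the snippet below is about).  If a node's contents were ever changed while it was still sitting in the map, a later remove() would miss it and return null.  A standalone illustration of that HashMap pitfall (plain Java, not the HBase class; names are made up):
{code}
import java.util.Arrays;
import java.util.HashMap;

public class MutableKeyDemo {
  // A key whose hashCode/equals depend on mutable contents, like
  // BidirectionalLRUMap's nodes.
  static class Node {
    byte[] contents;
    Node(byte[] c) { contents = c; }
    @Override public int hashCode() { return Arrays.hashCode(contents); }
    @Override public boolean equals(Object o) {
      return o instanceof Node && Arrays.equals(contents, ((Node) o).contents);
    }
  }

  public static void main(String[] args) {
    HashMap<Node, Short> map = new HashMap<Node, Short>();
    Node n = new Node(new byte[] { 1 });
    map.put(n, (short) 0);
    n.contents = new byte[] { 2 };      // mutate the key in place: its hash changes
    System.out.println(map.remove(n));  // prints null -- the entry is now unreachable
  }
}
{code}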
The reason for the NPE is this block in BidirectionalLRUMap.put():
{code}
 if (currSize < initSize) {
        // There is space to add without evicting.
        indexToNode[currSize].setContents(stored, 0, stored.length);
        setHead(indexToNode[currSize]);
        short ret = (short) currSize++;
        nodeToIndex.put(indexToNode[ret], ret);
        return ret;
      } else {
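        // remove() returns null when tail has no entry in nodeToIndex;
        // auto-unboxing that null into short is what throws the NPE above.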
        short s = nodeToIndex.remove(tail);
        tail.setContents(stored, 0, stored.length);
        // we need to rehash this.
        nodeToIndex.put(tail, s);
        moveToHead(tail);
        return s;
      }
{code}
Here 
{code}
short s = nodeToIndex.remove(tail);
{code}
is returning null, and auto-unboxing that null into the short primitive throws the NPE.  I am digging into this further to see if I can reproduce it.
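To see the mechanics in isolation, here is a minimal standalone snippet (plain Java, not HBase code; the class name is mine) showing how a null from Map.remove() turns into an NPE at the unboxing:
{code}
import java.util.HashMap;
import java.util.Map;

public class UnboxNpe {
  public static void main(String[] args) {
    Map<String, Short> nodeToIndex = new HashMap<String, Short>();
    // remove() returns null for a missing key; assigning that to a
    // primitive short forces auto-unboxing of null, which throws NPE.
    short s = nodeToIndex.remove("tail");
    System.out.println(s);
  }
}
{code}
So the real question is why tail is missing from nodeToIndex at that point, i.e. how the two views of BidirectionalLRUMap got out of sync.  Assigning to a Short and null-checking there would at least fail with a clearer message, but it would only mask whatever lets the views drift apart.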



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)