Posted to commits@hbase.apache.org by st...@apache.org on 2009/10/22 18:42:24 UTC

svn commit: r828778 - in /hadoop/hbase/branches/0.20: CHANGES.txt src/java/org/apache/hadoop/hbase/regionserver/Store.java

Author: stack
Date: Thu Oct 22 16:42:23 2009
New Revision: 828778

URL: http://svn.apache.org/viewvc?rev=828778&view=rev
Log:
HBASE-1925 IllegalAccessError: Has not been initialized (getMaxSequenceId)

Modified:
    hadoop/hbase/branches/0.20/CHANGES.txt
    hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/Store.java

Modified: hadoop/hbase/branches/0.20/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20/CHANGES.txt?rev=828778&r1=828777&r2=828778&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20/CHANGES.txt (original)
+++ hadoop/hbase/branches/0.20/CHANGES.txt Thu Oct 22 16:42:23 2009
@@ -17,6 +17,7 @@
    HBASE-1924  MapReduce Driver lost hsf2sf backporting hbase-1684
    HBASE-1777  column length is not checked before saved to memstore
    HBASE-1895  HConstants.MAX_ROW_LENGTH is incorrectly 64k, should be 32k
+   HBASE-1925  IllegalAccessError: Has not been initialized (getMaxSequenceId)
 
   IMPROVEMENTS
    HBASE-1899  Use scanner caching in shell count

Modified: hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/Store.java
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/Store.java?rev=828778&r1=828777&r2=828778&view=diff
==============================================================================
--- hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/Store.java (original)
+++ hadoop/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/Store.java Thu Oct 22 16:42:23 2009
@@ -555,10 +555,10 @@
             flushed += this.memstore.heapSizeChange(kv, true);
           }
         }
-        // B. Write out the log sequence number that corresponds to this output
-        // MapFile.  The MapFile is current up to and including logCacheFlushId.
-        StoreFile.appendMetadata(writer, logCacheFlushId);
       } finally {
+        // Write out the log sequence number that corresponds to this output
+        // hfile.  The hfile is current up to and including logCacheFlushId.
+        StoreFile.appendMetadata(writer, logCacheFlushId);
         writer.close();
       }
     }
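The change above moves the StoreFile.appendMetadata call out of the try body and into the finally block, so the log sequence number is written into the flushed file's metadata on every path before the writer is closed. Below is a minimal sketch of that try/finally pattern in plain Java; FlushWriter, Entry, and internalFlushCache are hypothetical stand-ins for illustration only, not the actual HBase 0.20 Store.java API.

import java.io.IOException;
import java.util.List;

// Sketch of the pattern applied by the fix: the metadata that records the
// highest log sequence number covered by this file is appended in the
// finally block, immediately before close(), so every file produced by a
// flush carries that entry even if appending the cache entries fails
// part way through.
public class FlushSketch {

  interface Entry { /* a single cell taken from the memstore snapshot */ }

  interface FlushWriter {
    void append(Entry e) throws IOException;
    // records the log sequence number this file is current up to
    void appendMetadata(long logCacheFlushId) throws IOException;
    void close() throws IOException;
  }

  static void flushSnapshot(FlushWriter writer, List<Entry> snapshot,
      long logCacheFlushId) throws IOException {
    try {
      // A. Write every entry from the snapshot into the new output file.
      for (Entry e : snapshot) {
        writer.append(e);
      }
    } finally {
      // B. Write out the log sequence number that corresponds to this
      // output file, then close.  Appending it here rather than inside the
      // try body means the metadata is present whenever the file is closed.
      writer.appendMetadata(logCacheFlushId);
      writer.close();
    }
  }
}

Read together with the JIRA title, the diff suggests the original placement could produce a store file that was closed without its sequence-number metadata when the append loop threw, which would later surface as the "Has not been initialized (getMaxSequenceId)" error when the file was read back.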