Posted to commits@hbase.apache.org by st...@apache.org on 2008/12/20 06:30:07 UTC

svn commit: r728237 - in /hadoop/hbase/branches/0.18: CHANGES.txt conf/hbase-default.xml

Author: stack
Date: Fri Dec 19 21:29:58 2008
New Revision: 728237

URL: http://svn.apache.org/viewvc?rev=728237&view=rev
Log:
HBASE-1070 Up default index interval in TRUNK and branch

Modified:
    hadoop/hbase/branches/0.18/CHANGES.txt
    hadoop/hbase/branches/0.18/conf/hbase-default.xml

Modified: hadoop/hbase/branches/0.18/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.18/CHANGES.txt?rev=728237&r1=728236&r2=728237&view=diff
==============================================================================
--- hadoop/hbase/branches/0.18/CHANGES.txt (original)
+++ hadoop/hbase/branches/0.18/CHANGES.txt Fri Dec 19 21:29:58 2008
@@ -10,6 +10,7 @@
                from org.apache.hadoop.hbase.DroppedSnapshotException
    HBASE-981   hbase.io.index.interval doesn't seem to have an effect;
                interval is 128 rather than the configured 32
+   HBASE-1070  Up default index interval in TRUNK and branch
 
   IMPROVEMENTS
    HBASE-1046  Narrow getClosestRowBefore by passing column family (backport)

Modified: hadoop/hbase/branches/0.18/conf/hbase-default.xml
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.18/conf/hbase-default.xml?rev=728237&r1=728236&r2=728237&view=diff
==============================================================================
--- hadoop/hbase/branches/0.18/conf/hbase-default.xml (original)
+++ hadoop/hbase/branches/0.18/conf/hbase-default.xml Fri Dec 19 21:29:58 2008
@@ -279,12 +279,13 @@
   </property>
   <property>
     <name>hbase.io.index.interval</name>
-    <value>32</value>
+    <value>128</value>
     <description>The interval at which we record offsets in hbase
     store files/mapfiles.  Default for stock mapfiles is 128.  Index
     files are read into memory.  If there are many of them, could prove
     a burden.  If so play with the hadoop io.map.index.skip property and
     skip every nth index member when reading back the index into memory.
+    The downside of a high index interval is slower access times.
     </description>
   </property>
   <property>
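For context, the setting this commit changes is normally overridden in `hbase-site.xml` rather than edited in `hbase-default.xml`. A minimal sketch of such an override, together with the Hadoop `io.map.index.skip` knob the description mentions, might look like the following (the values shown are illustrative, not recommendations from this commit):

```xml
<?xml version="1.0"?>
<!-- hbase-site.xml: site-local overrides of hbase-default.xml.
     Values below are example settings, not defaults shipped by this change. -->
<configuration>
  <property>
    <name>hbase.io.index.interval</name>
    <!-- Record an index entry every 128 keys in store files/mapfiles,
         matching the new default set by this commit. A smaller value
         gives faster lookups at the cost of a larger in-memory index. -->
    <value>128</value>
  </property>
  <property>
    <name>io.map.index.skip</name>
    <!-- Hadoop MapFile option: when loading an index into memory, keep
         only every (n+1)th entry. 0 keeps the whole index; raising it
         trades lookup speed for a smaller memory footprint. -->
    <value>0</value>
  </property>
</configuration>
```

The trade-off in both properties is the same one the patched description calls out: fewer index entries in memory mean a lighter heap burden per store file but more sequential scanning per random read.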