Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2007/05/31 00:43:28 UTC

[Lucene-hadoop Wiki] Trivial Update of "Hbase/HbaseArchitecture" by stack

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by stack:
http://wiki.apache.org/lucene-hadoop/Hbase/HbaseArchitecture

The comment on the change is:
Move News from here to HBase home page

------------------------------------------------------------------------------
  This effort is still a "work in progress". Please feel free to add
  comments, but please make them stand out by bolding or underlining
  them. Thanks!
- 
- '''NEWS:''' (updated 2007/05/30)
-  1. HBase is being updated frequently. The latest code can always be found in the [http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/contrib/hbase/ trunk of the Hadoop svn tree]. 
-  1. HBase now has its own component in the [https://issues.apache.org/jira/browse/HADOOP Hadoop Jira]. Bug reports, contributions, etc. should be tagged with the component '''contrib/hbase'''.
-  1. It is now possible to add or delete column families after a table exists. Before either of these operations the table being updated must be taken off-line (disabled).
-  1. Data compression is available on a per-column family basis. The options are:
-   * no compression
-   * record level compression
-   * block level compression
  
  = Table of Contents =
  
@@ -74, +65 @@

  ||<^|5> "com.cnn.www" ||<:> t9 || ||<)> "anchor:cnnsi.com" ||<:> "CNN" || ||
  ||<:> t8 || ||<)> "anchor:my.look.ca" ||<:> "CNN.com" || ||
  ||<:> t6 ||<:> "<html>..." || || ||<:> "text/html" ||
- ||<:> t5 ||<:> `"<html>..."` || || || ||
+ ||<:> t5 ||<:> "<html>..." || || || ||
- ||<:> t3 ||<:> `"<html>..."` || || || ||
+ ||<:> t3 ||<:> "<html>..." || || || ||
  
  [[Anchor(physical)]]
  == Physical Storage View ==
@@ -90, +81 @@

  
  ||<:> '''Row Key''' ||<:> '''Time Stamp''' ||<:> '''Column''' ''"contents:"'' ||
  ||<^|3> "com.cnn.www" ||<:> t6 ||<:> "<html>..." ||
- ||<:> t5 ||<:> `"<html>..."` ||
+ ||<:> t5 ||<:> "<html>..." ||
- ||<:> t3 ||<:> `"<html>..."` ||
+ ||<:> t3 ||<:> "<html>..." ||
  
  [[BR]]
  
@@ -174, +165 @@

  [[Anchor(scanner)]]
  == Scanner API ==
  
- To obtain a scanner, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/HClient.html#openTable(org.apache.hadoop.io.Text) open the table], and use [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/HClient.html#obtainScanner(org.apache.hadoop.io.Text%5B%5D,%20org.apache.hadoop.io.Text) obtainScanner].
+ To obtain a scanner, an 'iterator' that must be closed when you are done with it, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/HClient.html#openTable(org.apache.hadoop.io.Text) open the table], and use [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/HClient.html#obtainScanner(org.apache.hadoop.io.Text%5B%5D,%20org.apache.hadoop.io.Text) obtainScanner].
  
  Then use the [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/HScannerInterface.html scanner API]
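
A short usage sketch of the steps above (not taken from the page): the openTable and obtainScanner signatures follow the javadoc links, but the Configuration setup, the HStoreKey/TreeMap iteration loop, and the HScannerInterface method names are best-effort assumptions about the 2007-era client.

{{{
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HClient;
import org.apache.hadoop.hbase.HScannerInterface;
import org.apache.hadoop.hbase.HStoreKey;
import org.apache.hadoop.io.Text;

public class ScanExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();    // assumes HBase/Hadoop settings are on the classpath
    HClient client = new HClient(conf);
    client.openTable(new Text("webtable"));      // "webtable" is just an example table name

    // Scan the "anchor:" column family, starting from the first row.
    Text[] columns = { new Text("anchor:") };
    HScannerInterface scanner = client.obtainScanner(columns, new Text(""));
    try {
      HStoreKey key = new HStoreKey();
      TreeMap<Text, byte[]> results = new TreeMap<Text, byte[]>();
      while (scanner.next(key, results)) {
        for (Map.Entry<Text, byte[]> e : results.entrySet()) {
          System.out.println(key.getRow() + " " + e.getKey() + " = " + new String(e.getValue()));
        }
        results.clear();
      }
    } finally {
      scanner.close();                           // a scanner is an iterator that must be closed
    }
  }
}
}}}

Closing the scanner in a finally block matters because, as noted above, the scanner holds resources until it is explicitly closed.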
  
@@ -185, +176 @@

  key. Physically, tables are broken into HRegions. An HRegion is
  identified by its tablename plus a start/end-key pair. A given HRegion
  with keys <start> and <end> will store all the rows from (<start>,
- <end>]. A set of HRegions, sorted appropriately, forms an entire
+ <end>). A set of HRegions, sorted appropriately, forms an entire
  table.
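
A hypothetical sketch of this idea (none of these class names exist in HBase): each range stands in for one HRegion, identified by its table name plus a start/end-key pair, and a sorted map of ranges answers "which HRegion holds this row?". The boundary checks below treat the start key as inclusive and the end key as exclusive for illustration only; read them against the (<start>, <end>) convention described above.

{{{
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical illustration only -- these classes are not part of HBase.
class RegionRange {
  final String tableName;
  final String startKey;   // "" marks the first region of the table
  final String endKey;     // "" marks the last region of the table

  RegionRange(String tableName, String startKey, String endKey) {
    this.tableName = tableName;
    this.startKey = startKey;
    this.endKey = endKey;
  }
}

class RegionLocator {
  // Regions keyed by start key; iterating this map in order walks the entire table.
  private final TreeMap<String, RegionRange> byStartKey = new TreeMap<String, RegionRange>();

  void add(RegionRange r) {
    byStartKey.put(r.startKey, r);
  }

  // Returns the region whose key range covers the given row, or null if the row
  // falls outside every range (which should not happen for a complete table).
  RegionRange locate(String row) {
    // Candidate: the region with the greatest start key not greater than the row.
    SortedMap<String, RegionRange> head = byStartKey.headMap(row + "\0");
    if (head.isEmpty()) {
      return null;
    }
    RegionRange candidate = head.get(head.lastKey());
    boolean beforeEnd = candidate.endKey.length() == 0 || row.compareTo(candidate.endKey) < 0;
    return beforeEnd ? candidate : null;
  }
}
}}}

With the regions of a table registered in start-key order, locate("com.cnn.www") returns the single region whose key range covers that row.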
  
  All data is physically stored using Hadoop's DFS. Data is served to