Posted to commits@accumulo.apache.org by bi...@apache.org on 2012/03/23 20:07:52 UTC

svn commit: r1304563 [3/4] - in /incubator/accumulo/site/trunk: content/accumulo/ content/accumulo/1.4/ content/accumulo/1.4/examples/ content/accumulo/1.4/user_manual/ content/accumulo/user_manual_1.3-incubating/examples/ content/accumulo/user_manual_...

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Shell_Commands.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Shell_Commands.mdtext?rev=1304563&view=auto
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Shell_Commands.mdtext (added)
+++ incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Shell_Commands.mdtext Fri Mar 23 19:07:51 2012
@@ -0,0 +1,684 @@
+Title: Apache Accumulo User Manual: Shell Commands
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+** Up:** [Apache Accumulo User Manual Version 1.4][3] ** Previous:** [Administration][5]   ** [Contents][7]**   
+  
+
+
+## <a id=Shell_Commands></a> Shell Commands
+
+  
+**?**   
+  
+    usage: ? [ <command> <command> ] [-?] [-np] [-nw]   
+    description: provides information about the available commands   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -nw,-no-wrap  disables wrapping of output   
+  
+**about**   
+  
+    usage: about [-?] [-v]   
+    description: displays information about this program   
+      -?,-help  display this help   
+      -v,-verbose  displays detailed session information   
+  
+**addsplits**   
+  
+    usage: addsplits [<split> <split> ] [-?] [-b64] [-sf <filename>] [-t <tableName>]   
+    description: add split points to an existing table   
+      -?,-help  display this help   
+      -b64,-base64encoded  decode encoded split points   
+      -sf,-splits-file <filename>  file with newline separated list of rows to add to   
+              table   
+      -t,-table <tableName>  name of a table to add split points to   
+  
+**authenticate**   
+  
+    usage: authenticate <username> [-?]   
+    description: verifies a user's credentials   
+      -?,-help  display this help   
+  
+**bye**   
+  
+    usage: bye [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**classpath**   
+  
+    usage: classpath [-?]   
+    description: lists the current files on the classpath   
+      -?,-help  display this help   
+  
+**clear**   
+  
+    usage: clear [-?]   
+    description: clears the screen   
+      -?,-help  display this help   
+  
+**clonetable**   
+  
+    usage: clonetable <current table name> <new table name> [-?] [-e <arg>] [-nf] [-s   
+              <arg>]   
+    description: clone a table   
+      -?,-help  display this help   
+      -e,-exclude <arg>  properties that should not be copied from source table.   
+              Expects <prop>,<prop>   
+      -nf,-noFlush  do not flush table data in memory before cloning.   
+      -s,-set <arg>  set initial properties before the table comes online. Expects   
+              <prop>=<value>,<prop>=<value>   
+  
+**cls**   
+  
+    usage: cls [-?]   
+    description: clears the screen   
+      -?,-help  display this help   
+  
+**compact**   
+  
+    usage: compact [-?] [-b <arg>] [-e <arg>] [-nf] [-p <pattern> | -t <tableName>]   
+              [-w]   
+    description: sets all tablets for a table to major compact as soon as possible   
+              (based on current time)   
+      -?,-help  display this help   
+      -b,-begin-row <arg>  begin row   
+      -e,-end-row <arg>  end row   
+      -nf,-noFlush  do not flush table data in memory before compacting.   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+      -w,-wait  wait for compact to finish   
+  
+**config**   
+  
+    usage: config [-?] [-d <property> | -f <string> | -s <property=value>]  [-np]  [-t   
+              <table>]   
+    description: prints system properties and table specific properties   
+      -?,-help  display this help   
+      -d,-delete <property>  delete a per-table property   
+      -f,-filter <string>  show only properties that contain this string   
+      -np,-no-pagination  disables pagination of output   
+      -s,-set <property=value>  set a per-table property   
+      -t,-table <table>  display/set/delete properties for specified table   
+  
+**createtable**   
+  
+    usage: createtable <tableName> [-?] [-a   
+              <<columnfamily>[:<columnqualifier>]=<aggregation class>>] [-b64] [-cc   
+              <table>] [-cs <table> | -sf <filename>] [-evc] [-f <className>] [-ndi]   
+              [-tl | -tm]   
+    description: creates a new table, with optional aggregators and optionally pre-split   
+      -?,-help  display this help   
+      -a,-aggregator <<columnfamily>[:<columnqualifier>]=<aggregation class>>  comma   
+              separated column=aggregator   
+      -b64,-base64encoded  decode encoded split points   
+      -cc,-copy-config <table>  table to copy configuration from   
+      -cs,-copy-splits <table>  table to copy current splits from   
+      -evc,-enable-visibility-constraint  prevents users from writing data they cannot   
+              read.  When enabling this, you may want to consider disabling bulk import   
+              and alter table   
+      -f,-formatter <className>  default formatter to set   
+      -ndi,-no-default-iterators  prevents creation of the normal default iterator set   
+      -sf,-splits-file <filename>  file with newline separated list of rows to create a   
+              pre-split table   
+      -tl,-time-logical  use logical time   
+      -tm,-time-millis  use time in milliseconds   
+  
+**createuser**   
+  
+    usage: createuser <username> [-?] [-s <comma-separated-authorizations>]   
+    description: creates a new user   
+      -?,-help  display this help   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+  
+**debug**   
+  
+    usage: debug [ on | off ] [-?]   
+    description: turns debug logging on or off   
+      -?,-help  display this help   
+  
+**delete**   
+  
+    usage: delete <row> <colfamily> <colqualifier> [-?] [-l <expression>] [-t   
+              <timestamp>]   
+    description: deletes a record from a table   
+      -?,-help  display this help   
+      -l,-authorization-label <expression>  formatted authorization label expression   
+      -t,-timestamp <timestamp>  timestamp to use for insert   
+  
+**deleteiter**   
+  
+    usage: deleteiter [-?] [-majc] [-minc] -n <itername> [-scan] [-t <table>]   
+    description: deletes a table-specific iterator   
+      -?,-help  display this help   
+      -majc,-major-compaction  applied at major compaction   
+      -minc,-minor-compaction  applied at minor compaction   
+      -n,-name <itername>  iterator to delete   
+      -scan,-scan-time  applied at scan time   
+      -t,-table <table>  tableName   
+  
+**deletemany**   
+  
+    usage: deletemany [-?] [-b <start-row>] [-c   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>]   
+              [-e <end-row>] [-f] [-fm <className>] [-np] [-r <row>] [-s   
+              <comma-separated-authorizations>] [-st] [-t <table>]   
+    description: scans a table and deletes the resulting records   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>   
+              comma-separated columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -f,-force  forces deletion without prompting   
+      -fm,-formatter <className>  fully qualified name of the formatter class to use   
+      -np,-no-pagination  disables pagination of output   
+      -r,-row <row>  row to scan   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+              (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-table <table>  table to delete entries from   
+  
+**deleterows**   
+  
+    usage: deleterows [-?] [-b <arg>] [-e <arg>] [-f] [-t <table>]   
+    description: delete a range of rows in a table.  Note that rows matching the start   
+              row ARE NOT deleted, but rows matching the end row ARE deleted.   
+      -?,-help  display this help   
+      -b,-begin-row <arg>  begin row   
+      -e,-end-row <arg>  end row   
+      -f,-force  delete data even if start or end are not specified   
+      -t,-tableName <table>  table to delete row range   
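+      example (hypothetical table and rows):   
+              deleterows -t mytable -b row_a -e row_m   
+              this deletes every row after row_a, up to and including row_m   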
+  
+**deletescaniter**   
+  
+    usage: deletescaniter [-?] [-a] [-n <itername>] [-t <table>]   
+    description: deletes a table-specific scan iterator so it is no longer used during   
+              this shell session   
+      -?,-help  display this help   
+      -a,-all  delete all for tableName   
+      -n,-name <itername>  iterator to delete   
+      -t,-table <table>  tableName   
+  
+**deletetable**   
+  
+    usage: deletetable <tableName> [-?] [-t <arg>]   
+    description: deletes a table   
+      -?,-help  display this help   
+      -t,-tableName <arg>  deletes a table   
+  
+**deleteuser**   
+  
+    usage: deleteuser <username> [-?]   
+    description: deletes a user   
+      -?,-help  display this help   
+  
+**droptable**   
+  
+    usage: droptable <tableName> [-?] [-t <arg>]   
+    description: deletes a table   
+      -?,-help  display this help   
+      -t,-tableName <arg>  deletes a table   
+  
+**dropuser**   
+  
+    usage: dropuser <username> [-?]   
+    description: deletes a user   
+      -?,-help  display this help   
+  
+**du**   
+  
+    usage: du <table> <table> [-?] [-p <pattern>]   
+    description: Prints how much space is used by files referenced by a table.  When   
+              multiple tables are specified it prints how much space is used by files   
+              shared between tables, if any.   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names   
+  
+**egrep**   
+  
+    usage: egrep <regex> <regex> [-?] [-b <start-row>] [-c   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>]   
+              [-e <end-row>] [-f <int>] [-fm <className>] [-np] [-nt <arg>] [-r <row>]   
+              [-s <comma-separated-authorizations>] [-st] [-t <table>]   
+    description: searches each row, column family, column qualifier and value, in   
+              parallel, on the server side (using a java Matcher, so put .* before and   
+              after your term if you're not matching the whole element)   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>   
+              comma-separated columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -f,-show few <int>  Only shows a certain amount of characters   
+      -fm,-formatter <className>  fully qualified name of the formatter class to use   
+      -np,-no-pagination  disables pagination of output   
+      -nt,-num-threads <arg>  num threads   
+      -r,-row <row>  row to scan   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+              (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-tableName <table>  table to grep through   
+  
+**execfile**   
+  
+    usage: execfile [-?] [-v]   
+    description: specifies a file containing accumulo commands to execute   
+      -?,-help  display this help   
+      -v,-verbose  displays command prompt as commands are executed   
+  
+**exit**   
+  
+    usage: exit [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**flush**   
+  
+    usage: flush [-?] [-b <arg>] [-e <arg>] [-p <pattern> | -t <tableName>]  [-w]   
+    description: flushes a table's data that is currently in memory to disk   
+      -?,-help  display this help   
+      -b,-begin-row <arg>  begin row   
+      -e,-end-row <arg>  end row   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+      -w,-wait  wait for flush to finish   
+  
+**formatter**   
+  
+    usage: formatter [-?] -f <className> | -l | -r  [-t <table>]   
+    description: specifies a formatter to use for displaying table entries   
+      -?,-help  display this help   
+      -f,-formatter <className>  fully qualified name of the formatter class to use   
+      -l,-list  display the current formatter   
+      -r,-remove  remove the current formatter   
+      -t,-table <table>  table to set the formatter on   
+  
+**getauths**   
+  
+    usage: getauths [-?] [-u <user>]   
+    description: displays the maximum scan authorizations for a user   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**getgroups**   
+  
+    usage: getgroups [-?] [-t <table>]   
+    description: gets the locality groups for a given table   
+      -?,-help  display this help   
+      -t,-table <table>  get locality groups for specified table   
+  
+**getsplits**   
+  
+    usage: getsplits [-?] [-b64] [-m <num>] [-o <file>] [-t <table>] [-v]   
+    description: retrieves the current split points for tablets in the current table   
+      -?,-help  display this help   
+      -b64,-base64encoded  encode the split points   
+      -m,-max <num>  specifies the maximum number of splits to create   
+      -o,-output <file>  specifies a local file to write the splits to   
+      -t,-tableName <table>  table to get splits on   
+      -v,-verbose  print out the tablet information with start/end rows   
+  
+**grant**   
+  
+    usage: grant <permission> [-?] -p <pattern> | -s | -t <table>  -u <username>   
+    description: grants system or table permissions for a user   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of tables to grant permissions on   
+      -s,-system  grant a system permission   
+      -t,-table <table>  grant a table permission on this table   
+      -u,-user <username>  user to operate on   
+  
+**grep**   
+  
+    usage: grep <term> <term> [-?] [-b <start-row>] [-c   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>]   
+              [-e <end-row>] [-f <int>] [-fm <className>] [-np] [-nt <arg>] [-r <row>]   
+              [-s <comma-separated-authorizations>] [-st] [-t <table>]   
+    description: searches each row, column family, column qualifier and value in a table   
+              for a substring (not a regular expression), in parallel, on the server   
+              side   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>   
+              comma-separated columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -f,-show few <int>  Only shows a certain amount of characters   
+      -fm,-formatter <className>  fully qualified name of the formatter class to use   
+      -np,-no-pagination  disables pagination of output   
+      -nt,-num-threads <arg>  num threads   
+      -r,-row <row>  row to scan   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+              (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-tableName <table>  table to grep through   
+  
+**help**   
+  
+    usage: help [ <command> <command> ] [-?] [-np] [-nw]   
+    description: provides information about the available commands   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -nw,-no-wrap  disables wrapping of output   
+  
+**history**   
+  
+    usage: history [-?] [-c]   
+    description: Generates a list of commands previously executed   
+      -?,-help  display this help   
+      -c,-Clears History, takes no arguments.  Clears History File   
+  
+**importdirectory**   
+  
+    usage: importdirectory <directory> <failureDirectory> true|false [-?]   
+    description: bulk imports an entire directory of data files to the current table.   
+              The boolean argument determines if accumulo sets the time.   
+      -?,-help  display this help   
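+      example (hypothetical paths): importdirectory /tmp/bulk /tmp/bulk_failures false   
+              this imports the files under /tmp/bulk into the current table, writing any   
+              files that could not be imported to /tmp/bulk_failures; false means   
+              accumulo does not assign new timestamps   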
+  
+**info**   
+  
+    usage: info [-?] [-v]   
+    description: displays information about this program   
+      -?,-help  display this help   
+      -v,-verbose  displays detailed session information   
+  
+**insert**   
+  
+    usage: insert <row> <colfamily> <colqualifier> <value> [-?] [-l <expression>] [-t   
+              <timestamp>]   
+    description: inserts a record   
+      -?,-help  display this help   
+      -l,-authorization-label <expression>  formatted authorization label expression   
+      -t,-timestamp <timestamp>  timestamp to use for insert   
+  
+**listiter**   
+  
+    usage: listiter [-?] [-majc] [-minc] [-n <itername>] [-scan] [-t <table>]   
+    description: lists table-specific iterators   
+      -?,-help  display this help   
+      -majc,-major-compaction  applied at major compaction   
+      -minc,-minor-compaction  applied at minor compaction   
+      -n,-name <itername>  iterator to delete   
+      -scan,-scan-time  applied at scan time   
+      -t,-table <table>  tableName   
+  
+**listscans**   
+  
+    usage: listscans [-?] [-np] [-ts <tablet server>]   
+    description: list what scans are currently running in accumulo. See the   
+              accumulo.core.client.admin.ActiveScan javadoc for more information about   
+              columns.   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -ts,-tabletServer <tablet server>  list scans for a specific tablet server   
+  
+**masterstate**   
+  
+    usage: masterstate is deprecated, use the command line utility instead [-?]   
+    description: DEPRECATED: use the command line utility instead   
+      -?,-help  display this help   
+  
+**maxrow**   
+  
+    usage: maxrow [-?] [-b <begin-row>] [-be] [-e <end-row>] [-ee] [-s   
+              <comma-separated-authorizations>] [-t <table>]   
+    description: find the max row in a table within a given range   
+      -?,-help  display this help   
+      -b,-begin-row <begin-row>  begin row   
+      -be,-begin-exclusive  make start row exclusive, by default it is inclusive   
+      -e,-end-row <end-row>  end row   
+      -ee,-end-exclusive  make end row exclusive, by default it is inclusive   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+              (all user auths are used if this argument is not specified)   
+      -t,-table <table>  table to find the max row of   
+  
+**merge**   
+  
+    usage: merge [-?] [-b <arg>] [-e <arg>] [-f] [-s <arg>] [-t <table>] [-v]   
+    description: merge tablets in a table   
+      -?,-help  display this help   
+      -b,-begin-row <arg>  begin row   
+      -e,-end-row <arg>  end row   
+      -f,-force  merge small tablets to large tablets, even if it goes over the given   
+              size   
+      -s,-size <arg>  merge tablets to the given size over the entire table   
+      -t,-tableName <table>  table to be merged   
+      -v,-verbose  verbose output during merge   
+  
+**notable**   
+  
+    usage: notable [-?] [-t <arg>]   
+    description: returns to a tableless shell state   
+      -?,-help  display this help   
+      -t,-tableName <arg>  Returns to a no table state   
+  
+**offline**   
+  
+    usage: offline [-?] [-p <pattern> | -t <tableName>]   
+    description: starts the process of taking table offline   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**online**   
+  
+    usage: online [-?] [-p <pattern> | -t <tableName>]   
+    description: starts the process of putting a table online   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**passwd**   
+  
+    usage: passwd [-?] [-u <user>]   
+    description: changes a user's password   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**quit**   
+  
+    usage: quit [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**renametable**   
+  
+    usage: renametable <current table name> <new table name> [-?]   
+    description: rename a table   
+      -?,-help  display this help   
+  
+**revoke**   
+  
+    usage: revoke <permission> [-?] -s | -t <table>  -u <username>   
+    description: revokes system or table permissions from a user   
+      -?,-help  display this help   
+      -s,-system  revoke a system permission   
+      -t,-table <table>  revoke a table permission on this table   
+      -u,-user <username>  user to operate on   
+  
+**scan**   
+  
+    usage: scan [-?] [-b <start-row>] [-c   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>]   
+              [-e <end-row>] [-f <int>] [-fm <className>] [-np] [-r <row>] [-s   
+              <comma-separated-authorizations>] [-st] [-t <table>]   
+    description: scans the table, and displays the resulting records   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns   
+              <<columnfamily>[:<columnqualifier>],<columnfamily>[:<columnqualifier>]>   
+              comma-separated columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -f,-show few <int>  Only shows a certain amount of characters   
+      -fm,-formatter <className>  fully qualified name of the formatter class to use   
+      -np,-no-pagination  disables pagination of output   
+      -r,-row <row>  row to scan   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+              (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-tableName <table>  table to be scanned   
+  
+**select**   
+  
+    usage: select <row> <columnfamily> <columnqualifier> [-?] [-np] [-s   
+              <comma-separated-authorizations>] [-st] [-t <table>]   
+    description: scans for and displays a single record   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-tableName <table>  table   
+  
+**selectrow**   
+  
+    usage: selectrow <row> [-?] [-np] [-s <comma-separated-authorizations>] [-st] [-t   
+              <table>]   
+    description: scans a single row and displays all resulting records   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-tableName <table>  table to row select   
+  
+**setauths**   
+  
+    usage: setauths [-?] -c | -s <comma-separated-authorizations>  [-u <user>]   
+    description: sets the maximum scan authorizations for a user   
+      -?,-help  display this help   
+      -c,-clear-authorizations  clears the scan authorizations   
+      -s,-scan-authorizations <comma-separated-authorizations>  set the scan   
+              authorizations   
+      -u,-user <user>  user to operate on   
+  
+**setgroups**   
+  
+    usage: setgroups <group>=<col fam>,<col fam> <group>=<col fam>,<col fam> [-?]   
+              [-t <table>]   
+    description: sets the locality groups for a given table (for binary or commas, use   
+              Java API)   
+      -?,-help  display this help   
+      -t,-table <table>  get locality groups for specified table   
+  
+**setiter**   
+  
+    usage: setiter [-?] -ageoff | -agg | -class <name> | -regex | -reqvis | -vers   
+              [-majc] [-minc] [-n <itername>] -p <pri>  [-scan] [-t <table>]   
+    description: sets a table-specific iterator   
+      -?,-help  display this help   
+      -ageoff,-ageoff  an aging off type   
+      -agg,-aggregator  an aggregating type   
+      -class,-class-name <name>  a java class type   
+      -majc,-major-compaction  applied at major compaction   
+      -minc,-minor-compaction  applied at minor compaction   
+      -n,-name <itername>  iterator to set   
+      -p,-priority <pri>  the order in which the iterator is applied   
+      -regex,-regular-expression  a regex matching type   
+      -reqvis,-require-visibility  a type that omits entries with empty visibilities   
+      -scan,-scan-time  applied at scan time   
+      -t,-table <table>  tableName   
+      -vers,-version  a versioning type   
+  
+**setscaniter**   
+  
+    usage: setscaniter [-?] -ageoff | -agg | -class <name> | -regex | -reqvis | -vers   
+              [-n <itername>] -p <pri>  [-t <table>]   
+    description: sets a table-specific scan iterator for this shell session   
+      -?,-help  display this help   
+      -ageoff,-ageoff  an aging off type   
+      -agg,-aggregator  an aggregating type   
+      -class,-class-name <name>  a java class type   
+      -n,-name <itername>  iterator to set   
+      -p,-priority <pri>  the order in which the iterator is applied   
+      -regex,-regular-expression  a regex matching type   
+      -reqvis,-require-visibility  a type that omits entries with empty visibilities   
+      -t,-table <table>  tableName   
+      -vers,-version  a versioning type   
+  
+**sleep**   
+  
+    usage: sleep [-?]   
+    description: sleep for the given number of seconds   
+      -?,-help  display this help   
+  
+**systempermissions**   
+  
+    usage: systempermissions [-?]   
+    description: displays a list of valid system permissions   
+      -?,-help  display this help   
+  
+**table**   
+  
+    usage: table <tableName> [-?]   
+    description: switches to the specified table   
+      -?,-help  display this help   
+  
+**tablepermissions**   
+  
+    usage: tablepermissions [-?]   
+    description: displays a list of valid table permissions   
+      -?,-help  display this help   
+  
+**tables**   
+  
+    usage: tables [-?] [-l]   
+    description: displays a list of all existing tables   
+      -?,-help  display this help   
+      -l,-list-ids  display internal table ids along with the table name   
+  
+**trace**   
+  
+    usage: trace [ on | off ] [-?]   
+    description: turns trace logging on or off   
+      -?,-help  display this help   
+  
+**user**   
+  
+    usage: user <username> [-?]   
+    description: switches to the specified user   
+      -?,-help  display this help   
+  
+**userpermissions**   
+  
+    usage: userpermissions [-?] [-u <user>]   
+    description: displays a user's system and table permissions   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**users**   
+  
+    usage: users [-?]   
+    description: displays a list of existing users   
+      -?,-help  display this help   
+  
+**whoami**   
+  
+    usage: whoami [-?]   
+    description: reports the current user name   
+      -?,-help  display this help   
+  
+
+
+* * *
+
+** Up:** [Apache Accumulo User Manual Version 1.4][3] ** Previous:** [Administration][5]   ** [Contents][7]**
+
+   [3]: accumulo_user_manual.html
+   [5]: Administration.html
+   [7]: Contents.html
+

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Shell_Commands.mdtext
------------------------------------------------------------------------------
    svn:eol-style = native

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Configuration.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Configuration.mdtext?rev=1304563&view=auto
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Configuration.mdtext (added)
+++ incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Configuration.mdtext Fri Mar 23 19:07:51 2012
@@ -0,0 +1,491 @@
+Title: Apache Accumulo User Manual: Table Configuration
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**   
+  
+<a id=CHILD_LINKS></a>**Subsections**
+
+* [Locality Groups][9]
+* [Constraints][10]
+* [Bloom Filters][11]
+* [Iterators][12]
+* [Block Cache][13]
+* [Compaction][14]
+* [Pre-splitting tables][15]
+* [Merging tablets][16]
+* [Delete Range][17]
+* [Cloning Tables][18]
+
+* * *
+
+## <a id=Table_Configuration></a> Table Configuration
+
+Accumulo tables have a few options that can be configured to alter the default behavior of Accumulo as well as improve performance based on the data stored. These include locality groups, constraints, bloom filters, iterators, and block cache. 
+
+## <a id=Locality_Groups></a> Locality Groups
+
+Accumulo supports storing sets of column families separately on disk so that clients can efficiently scan over columns that are frequently used together and avoid scanning over column families that are not requested. After locality groups are set, Scanner and BatchScanner operations will automatically take advantage of them whenever the fetchColumnFamilies() method is used. 
+
+By default tables place all column families into the same ``default'' locality group. Additional locality groups can be configured anytime via the shell or programmatically as follows: 
+
+### <a id=Managing_Locality_Groups_via_the_Shell></a> Managing Locality Groups via the Shell
+    
+    
+    usage: setgroups <group>=<col fam>{,<col fam>}{ <group>=<col fam>{,<col
+    fam>}} [-?] -t <table>
+    
+    user@myinstance mytable> setgroups -t mytable group_one=colf1,colf2
+    
+    user@myinstance mytable> getgroups -t mytable
+    group_one=colf1,colf2
+    
+
+### <a id=Managing_Locality_Groups_via_the_Client_API></a> Managing Locality Groups via the Client API
+    
+    
+    Connector conn;
+    
+    HashMap<String,Set<Text>> localityGroups =
+        new HashMap<String, Set<Text>>();
+    
+    HashSet<Text> metadataColumns = new HashSet<Text>();
+    metadataColumns.add(new Text("domain"));
+    metadataColumns.add(new Text("link"));
+    
+    HashSet<Text> contentColumns = new HashSet<Text>();
+    contentColumns.add(new Text("body"));
+    contentColumns.add(new Text("images"));
+    
+    localityGroups.put("metadata", metadataColumns);
+    localityGroups.put("content", contentColumns);
+    
+    conn.tableOperations().setLocalityGroups("mytable", localityGroups);
+    
+    // existing locality groups can be obtained as follows
+    Map<String, Set<Text>> groups =
+        conn.tableOperations().getLocalityGroups("mytable");
+    
+
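+Once locality groups are set, a scan that fetches only the column families of one group only needs to read that group's files. The following is a brief sketch (assuming the Connector and the locality groups configured above): 
+    
+    
+    Scanner scanner = conn.createScanner("mytable", new Authorizations());
+    
+    // only the files of the "metadata" locality group are read for this scan
+    scanner.fetchColumnFamily(new Text("domain"));
+    
+    for (Entry<Key,Value> entry : scanner) {
+        System.out.println(entry.getKey() + " -> " + entry.getValue());
+    }
+    
+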
+The assignment of Column Families to Locality Groups can be changed anytime. The physical movement of column families into their new locality groups takes place via the periodic Major Compaction process that takes place continuously in the background. Major Compaction can also be scheduled to take place immediately through the shell: 
+    
+    
+    user@myinstance mytable> compact -t mytable
+    
+
+## <a id=Constraints></a> Constraints
+
+Accumulo supports constraints applied on mutations at insert time. This can be used to disallow certain inserts according to a user defined policy. Any mutation that fails to meet the requirements of the constraint is rejected and sent back to the client. 
+
+Constraints can be enabled by setting a table property as follows: 
+    
+    
+    user@myinstance mytable> config -t mytable -s table.constraint.1=com.test.ExampleConstraint
+    user@myinstance mytable> config -t mytable -s table.constraint.2=com.test.AnotherConstraint
+    user@myinstance mytable> config -t mytable -f constraint
+    ---------+--------------------------------+----------------------------
+    SCOPE    | NAME                           | VALUE
+    ---------+--------------------------------+----------------------------
+    table    | table.constraint.1............ | com.test.ExampleConstraint
+    table    | table.constraint.2............ | com.test.AnotherConstraint
+    ---------+--------------------------------+----------------------------
+    
+
+Currently there are no general-purpose constraints provided with the Accumulo distribution. New constraints can be created by writing a Java class that implements the org.apache.accumulo.core.constraints.Constraint interface. 
+
+To deploy a new constraint, create a jar file containing the class implementing the new constraint and place it in the lib directory of the Accumulo installation. New constraint jars can be added to Accumulo and enabled without restarting but any change to an existing constraint class requires Accumulo to be restarted. 
+
+An example of constraints can be found in   
+accumulo/docs/examples/README.constraints with corresponding code under   
+accumulo/src/examples/simple/main/java/accumulo/examples/simple/constraints . 
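+
+A minimal sketch of a constraint class follows (a hypothetical example, assuming the 1.4 Constraint interface with its check() and getViolationDescription() methods); it rejects any mutation that contains an empty value: 
+    
+    
+    public class NoEmptyValuesConstraint implements Constraint {
+    
+        private static final short EMPTY_VALUE = 1;
+    
+        public String getViolationDescription(short violationCode) {
+            if (violationCode == EMPTY_VALUE)
+                return "mutation contained an empty value";
+            return null;
+        }
+    
+        public List<Short> check(Environment env, Mutation mutation) {
+            // reject the mutation if any column update carries a zero-length value
+            for (ColumnUpdate update : mutation.getUpdates()) {
+                if (update.getValue().length == 0)
+                    return Collections.singletonList(EMPTY_VALUE);
+            }
+            return null; // null (or an empty list) means no violations
+        }
+    }
+    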
+
+## <a id=Bloom_Filters></a> Bloom Filters
+
+As mutations are applied to an Accumulo table, several files are created per tablet. If bloom filters are enabled, Accumulo will create and load a small data structure into memory to determine whether a file contains a given key before opening the file. This can speed up lookups considerably. 
+
+To enable bloom filters, enter the following command in the Shell: 
+    
+    
+    user@myinstance> config -t mytable -s table.bloom.enabled=true
+    
+
+An extensive example of using Bloom Filters can be found at   
+accumulo/docs/examples/README.bloom . 
+
+## <a id=Iterators></a> Iterators
+
+Iterators provide a modular mechanism for adding functionality to be executed by TabletServers when scanning or compacting data. This allows users to efficiently summarize, filter, and aggregate data. In fact, the built-in features of cell-level security and column fetching are implemented using Iterators. Some useful Iterators are provided with Accumulo and can be found in the org.apache.accumulo.core.iterators.user package. 
+
+### <a id=Setting_Iterators_via_the_Shell></a> Setting Iterators via the Shell
+    
+    
+    usage: setiter [-?] -ageoff | -agg | -class <name> | -regex | 
+    -reqvis | -vers   [-majc] [-minc] [-n <itername>] -p <pri>   
+    [-scan] [-t <table>]
+    
+    user@myinstance mytable> setiter -t mytable -scan -p 10 -n myiter
+    
+
+### <a id=Setting_Iterators_Programmatically></a> Setting Iterators Programmatically
+    
+    
+    scanner.addIterator(new IteratorSetting(
+        15, // priority
+        "com.company.MyIterator", // class name
+        "myiter" // name this iterator
+    ));
+    
+
+Some iterators take additional parameters from client code, as in the following example: 
+    
+    
+    IteratorSetting iter = new IteratorSetting(...);
+    iter.addOption("myoptionname", "myoptionvalue");
+    scanner.addIterator(iter);
+    
+
+Tables support separate Iterator settings to be applied at scan time, upon minor compaction and upon major compaction. For most uses, tables will have identical iterator settings for all three to avoid inconsistent results. 
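+
+The same iterator configuration can also be attached to a table through the client API, for any subset of the three scopes. The following is a sketch (assuming an existing Connector conn and the hypothetical com.company.MyIterator class used above): 
+    
+    
+    IteratorSetting setting = new IteratorSetting(15, "myiter", "com.company.MyIterator");
+    
+    // attach the iterator at scan, minor compaction, and major compaction time
+    conn.tableOperations().attachIterator("mytable", setting,
+        EnumSet.of(IteratorScope.scan, IteratorScope.minc, IteratorScope.majc));
+    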
+
+### <a id=Versioning_Iterators_and_Timestamps></a> Versioning Iterators and Timestamps
+
+Accumulo provides the capability to manage versioned data through the use of timestamps within the Key. If a timestamp is not specified in the key created by the client then the system will set the timestamp to the current time. Two keys with identical rowIDs and columns but different timestamps are considered two versions of the same key. If two inserts are made into accumulo with the same rowID, column, and timestamp, then the behavior is non-deterministic. 
+
+Timestamps are sorted in descending order, so the most recent data comes first. Accumulo can be configured to return the top k versions, or versions later than a given date. The default is to return the one most recent version. 
+
+The version policy can be changed by changing the VersioningIterator options for a table as follows: 
+    
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.scan.vers.opt.maxVersions=3
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.minc.vers.opt.maxVersions=3
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.majc.vers.opt.maxVersions=3
+    
+
+#### <a id=Logical_Time></a> Logical Time
+
+Accumulo 1.2 introduced the concept of logical time. This ensures that timestamps set by Accumulo always move forward, which helps avoid problems caused by TabletServers that have different time settings. A per-tablet counter assigns a unique, monotonically increasing timestamp to each mutation. When using time in milliseconds, two mutations that arrive within the same millisecond receive the same timestamp; even then, timestamps set by Accumulo will still always move forward and never backwards. 
+
+A table can be configured to use logical timestamps at creation time as follows: 
+    
+    
+    user@myinstance> createtable -tl logical
+    
+
+#### <a id=Deletes></a> Deletes
+
+Deletes are special keys in Accumulo that get sorted along with all the other data. When a delete key is inserted, Accumulo will not show anything that has a timestamp less than or equal to the delete key. During major compaction, any keys older than a delete key are omitted from the new file created, and the omitted keys are removed from disk as part of the regular garbage collection process. 
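+
+For example, in a hypothetical shell session, a delete masks the earlier insert even though the old key may remain in the underlying files until it is compacted away: 
+    
+    
+    user@myinstance mytable> insert row1 colf colq somevalue
+    user@myinstance mytable> scan
+    row1 colf:colq []    somevalue
+    user@myinstance mytable> delete row1 colf colq
+    user@myinstance mytable> scan
+    user@myinstance mytable>
+    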
+
+### <a id=Filters></a> Filters
+
+When scanning over a set of key-value pairs it is possible to apply an arbitrary filtering policy through the use of a Filter. Filters are types of iterators that return only key-value pairs that satisfy the filter logic. Accumulo has a few built-in filters that can be configured on any table: AgeOff, ColumnAgeOff, Timestamp, NoVis, and RegEx. More can be added by writing a Java class that extends the   
+org.apache.accumulo.core.iterators.Filter class. 
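+
+A custom filter can be quite small. The following is a sketch (a hypothetical class, assuming accept() is the only method that must be implemented); it keeps only entries whose value parses as a number greater than 100: 
+    
+    
+    public class GreaterThanHundredFilter extends Filter {
+        @Override
+        public boolean accept(Key k, Value v) {
+            // keep the entry only if its value is a number larger than 100
+            try {
+                return Long.parseLong(v.toString()) > 100;
+            } catch (NumberFormatException e) {
+                return false;
+            }
+        }
+    }
+    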
+
+The AgeOff filter can be configured to remove data older than a certain date or a fixed amount of time from the present. The following example sets a table to delete everything inserted more than 3 seconds ago (the ttl option below is given in milliseconds): 
+    
+    
+    user@myinstance> createtable filtertest
+    user@myinstance filtertest> setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -ageoff
+    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+    ----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
+    ----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter ttl, time to live (milliseconds): 3000
+    ----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day: 
+    user@myinstance filtertest> 
+    user@myinstance filtertest> scan
+    user@myinstance filtertest> insert foo a b c
+    user@myinstance filtertest> scan
+    foo a:b [] c
+    user@myinstance filtertest> sleep 4
+    user@myinstance filtertest> scan
+    user@myinstance filtertest>
+    
+
+To see the iterator settings for a table, use: 
+    
+    
+    user@example filtertest> config -t filtertest -f iterator
+    ---------+---------------------------------------------+------------------
+    SCOPE    | NAME                                        | VALUE
+    ---------+---------------------------------------------+------------------
+    table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.majc.myfilter.opt.ttl ...... | 3000
+    table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.minc.myfilter.opt.ttl ...... | 3000
+    table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.scan.myfilter.opt.ttl ...... | 3000
+    table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+    ---------+---------------------------------------------+------------------
+    
+
+### <a id=Combiners></a> Combiners
+
+Accumulo allows Combiners to be configured on tables and column families. When a Combiner is set, it is applied across the values associated with any keys that share rowID, column family, and column qualifier. This is similar to the reduce step in MapReduce, which applies some function to all the values associated with a particular key. 
+
+For example, if a summing combiner were configured on a table and the following mutations were inserted: 
+    
+    
+    Row     Family Qualifier Timestamp  Value
+    rowID1  colfA  colqA     20100101   1
+    rowID1  colfA  colqA     20100102   1
+    
+
+The table would reflect only one aggregate value: 
+    
+    
+    rowID1  colfA  colqA     -          2
+    
+
+Combiners can be enabled for a table using the setiter command in the shell. Below is an example. 
+    
+    
+    root@a14 perDayCounts> setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount 
+                           -class org.apache.accumulo.core.iterators.user.SummingCombiner
+    TypedValueCombiner can interpret Values as a variety of number encodings 
+      (VLong, Long, or String) before combining
+    ----------> set SummingCombiner parameter columns, 
+                <col fam>[:<col qual>]{,<col fam>[:<col qual>]} : day
+    ----------> set SummingCombiner parameter type, <VARNUM|LONG|STRING>: STRING
+    
+    root@a14 perDayCounts> insert foo day 20080101 1
+    root@a14 perDayCounts> insert foo day 20080101 1
+    root@a14 perDayCounts> insert foo day 20080103 1
+    root@a14 perDayCounts> insert bar day 20080101 1
+    root@a14 perDayCounts> insert bar day 20080101 1
+    
+    root@a14 perDayCounts> scan
+    bar day:20080101 []    2
+    foo day:20080101 []    2
+    foo day:20080103 []    1
+    
+
+Accumulo includes some useful Combiners out of the box. To find these look in the   
+**org.apache.accumulo.core.iterators.user** package. 
+
+Additional Combiners can be added by creating a Java class that extends   
+**org.apache.accumulo.core.iterators.Combiner** and adding a jar containing that class to Accumulo's lib/ext directory. 
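+
+The following is a sketch of such a class (hypothetical, assuming reduce() is the only method that must be overridden); it keeps the maximum of the values for a key, treating each value as the string form of a long: 
+    
+    
+    public class MaxCombiner extends Combiner {
+        @Override
+        public Value reduce(Key key, Iterator<Value> iter) {
+            // collapse all values for this key into the single largest value
+            long max = Long.MIN_VALUE;
+            while (iter.hasNext()) {
+                max = Math.max(max, Long.parseLong(iter.next().toString()));
+            }
+            return new Value(Long.toString(max).getBytes());
+        }
+    }
+    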
+
+An example of a Combiner can be found under   
+accumulo/src/examples/simple/main/java/org/apache/accumulo/examples/simple/combiner/StatsCombiner.java 
+
+## <a id=Block_Cache></a> Block Cache
+
+In order to increase throughput of commonly accessed entries, Accumulo employs a block cache. This block cache buffers data in memory so that it doesn't have to be read off of disk. The RFile format that Accumulo prefers is a mix of index blocks and data blocks, where the index blocks are used to find the appropriate data blocks. Typical queries to Accumulo result in a binary search over several index blocks followed by a linear scan of one or more data blocks. 
+
+The block cache can be configured on a per-table basis, and all tablets hosted on a tablet server share a single resource pool. To configure the size of the tablet server's block cache, set the following properties: 
+    
+    
+    tserver.cache.data.size: Specifies the size of the cache for file data blocks.
+    tserver.cache.index.size: Specifies the size of the cache for file indices.
+    
+
+To enable the block cache for your table, set the following properties: 
+    
+    
+    table.cache.block.enable: Determines whether file (data) block cache is enabled.
+    table.cache.index.enable: Determines whether index cache is enabled.
+    
+
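+For example, in a hypothetical shell session, both table properties can be set with the config command: 
+    
+    
+    user@myinstance> config -t mytable -s table.cache.block.enable=true
+    user@myinstance> config -t mytable -s table.cache.index.enable=true
+    
+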
+The block cache can have a significant effect on alleviating hot spots, as well as reducing query latency. It is enabled by default for the !METADATA table. 
+
+## <a id=Compaction></a> Compaction
+
+As data is written to Accumulo it is buffered in memory. The data buffered in memory is eventually written to HDFS on a per tablet basis. Files can also be added to tablets directly by bulk import. In the background tablet servers run major compactions to merge multiple files into one. The tablet server has to decide which tablets to compact and which files within a tablet to compact. This decision is made using the compaction ratio, which is configurable on a per table basis. To configure this ratio modify the following property: 
+    
+    
+    table.compaction.major.ratio
+    
+
+Increasing this ratio will result in more files per tablet and less compaction work. More files per tablet means higher query latency, so adjusting this ratio is a trade-off between ingest and query performance. The ratio defaults to 3. 
+
+The way the ratio works is that a set of files is compacted into one file if the sum of the sizes of the files in the set is larger than the ratio multiplied by the size of the largest file in the set. If this is not true for the set of all files in a tablet, the largest file is removed from consideration, and the remaining files are considered for compaction. This is repeated until a compaction is triggered or there are no files left to consider. 
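+
+As a worked example with hypothetical file sizes and the default ratio of 3: a tablet with files of 10M, 9M, 8M, and 7M totals 34M, which is larger than 3 x 10M = 30M, so all four files are compacted into one. A tablet with files of 50M, 4M, and 3M totals 57M, which is not larger than 3 x 50M = 150M; dropping the 50M file leaves 7M, which is still not larger than 3 x 4M = 12M, so no compaction is triggered. 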
+
+The number of background threads tablet servers use to run major compactions is configurable. To configure this modify the following property: 
+    
+    
+    tserver.compaction.major.concurrent.max
+    
+
+Also, the number of threads tablet servers use for minor compactions is configurable. To configure this modify the following property: 
+    
+    
+    tserver.compaction.minor.concurrent.max
+    
+
+The numbers of minor and major compactions running and queued are visible on the Accumulo monitor page. This allows you to see if compactions are backing up and whether adjustments to the above settings are needed. When adjusting the number of threads available for compactions, consider the number of cores and other tasks running on the nodes, such as maps and reduces. 
+
+If major compactions are not keeping up, then the number of files per tablet will grow to a point such that query performance starts to suffer. One way to handle this situation is to increase the compaction ratio. For example, if the compaction ratio were set to 1, then every new file added to a tablet by minor compaction would immediately queue the tablet for major compaction. So if a tablet has a 200M file and minor compaction writes a 1M file, then the major compaction will attempt to merge the 200M and 1M file. If the tablet server has lots of tablets trying to do this sort of thing, then major compactions will back up and the number of files per tablet will start to grow, assuming data is being continuously written. Increasing the compaction ratio will alleviate backups by lowering the amount of major compaction work that needs to be done. 
+
+Another option to deal with the files per tablet growing too large is to adjust the following property: 
+    
+    
+    table.file.max
+    
+
+When a tablet reaches this number of files and needs to flush its in-memory data to disk, it will choose to do a merging minor compaction. A merging minor compaction will merge the tablet's smallest file with the data in memory at minor compaction time. Therefore the number of files will not grow beyond this limit. This will make minor compactions take longer, which will cause ingest performance to decrease. This can cause ingest to slow down until major compactions have enough time to catch up. When adjusting this property, also consider adjusting the compaction ratio. Ideally, merging minor compactions never need to occur and major compactions will keep up. It is possible to configure the file max and compaction ratio such that only merging minor compactions occur and major compactions never occur. This should be avoided because doing only merging minor compactions causes ![$O(N^2)$][19] work to be done. The amount of work done by major compactions is ![$O(N*\log_R(N))$][20] where *R* is the compaction ratio. 
+
+Compactions can be initiated manually for a table. To initiate a minor compaction, use the flush command in the shell. To initiate a major compaction, use the compact command in the shell. The compact command will compact all tablets in a table to one file. Even tablets with one file are compacted. This is useful for the case where a major compaction filter is configured for a table. In 1.4 the ability to compact a range of a table was added. To use this feature specify start and stop rows for the compact command. This will only compact tablets that overlap the given row range. 
+
+## <a id=Pre-splitting_tables></a> Pre-splitting tables
+
+Accumulo will balance and distribute tables across servers. Before a table gets large, it will be maintained as a single tablet on a single server. This limits the speed at which data can be added or queried to the speed of a single node. To improve performance when a table is new or small, you can add split points and generate new tablets. 
+
+In the shell: 
+    
+    
+    root@myinstance> createtable newTable
+    root@myinstance> addsplits -t newTable g n t
+    
+
+This will create a new table with 4 tablets. The table will be split on the letters ``g'', ``n'', and ``t'' which will work nicely if the row data start with lower-case alphabetic characters. If your row data includes binary information or numeric information, or if the distribution of the row information is not flat, then you would pick different split points. Now ingest and query can proceed on 4 nodes which can improve performance. 
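+
+The same split points can be added through the client API. The following is a sketch (assuming an existing Connector conn): 
+    
+    
+    SortedSet<Text> splits = new TreeSet<Text>();
+    splits.add(new Text("g"));
+    splits.add(new Text("n"));
+    splits.add(new Text("t"));
+    
+    conn.tableOperations().addSplits("newTable", splits);
+    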
+
+## <a id=Merging_tablets></a> Merging tablets
+
+Over time, a table can get very large, so large that it has hundreds of thousands of split points. Once there are enough tablets to spread a table across the entire cluster, additional splits may not improve performance, and may create unnecessary bookkeeping. The distribution of data may change over time. For example, if row data contains date information, and data is continually added and removed to maintain a window of current information, tablets for older rows may be empty. 
+
+Accumulo supports tablet merging, which can be used to reduce the number of split points. The following command will merge all rows from ``A'' to ``Z'' into a single tablet: 
+    
+    
+    root@myinstance> merge -t myTable -s A -e Z
+    
+
+If the result of a merge produces a tablet that is larger than the configured split size, the tablet may be split by the tablet server. Be sure to increase your tablet size prior to any merges if the goal is to have larger tablets: 
+    
+    
+    root@myinstance> config -t myTable -s table.split.threshold=2G
+    
+
+In order to merge small tablets, you can ask Accumulo to merge sections of a table smaller than a given size. 
+    
+    
+    root@myinstance> merge -t myTable -s 100M
+    
+
+By default, small tablets will not be merged into tablets that are already larger than the given size. This can leave isolated small tablets. To force small tablets to be merged into larger tablets use the ``-force'' option: 
+    
+    
+    root@myinstance> merge -t myTable -s 100M --force
+    
+
+Merging away small tablets works on one section at a time. If your table contains many sections of small split points, or you are attempting to change the split size of the entire table, it will be faster to set the split point and merge the entire table: 
+    
+    
+    root@myinstance> config -t myTable -s table.split.threshold=256M
+    root@myinstance> merge -t myTable
+    
+
+## <a id=Delete_Range></a> Delete Range
+
+Consider an indexing scheme that uses date information in each row. For example, ``20110823-15:20:25.013'' might be a row that specifies a date and time. In some cases, we might like to delete rows based on this date, say to remove all the data older than the current year. Accumulo supports a delete range operation which efficiently removes data between two rows. For example: 
+    
+    
+    root@myinstance> deleterange -t myTable -s 2010 -e 2011
+    
+
+This will delete all rows starting with ``2010'', and it will stop at any row starting with ``2011''. You can delete any data prior to 2011 with: 
+    
+    
+    root@myinstance> deleterange -t myTable -e 2011 --force
+    
+
+The shell will not allow you to delete an unbounded range (no start) unless you provide the ``-force'' option. 
+
+Range deletion is implemented using splits at the given start/end positions, and will affect the number of splits in the table. 
+
+## <a id=Cloning_Tables></a> Cloning Tables
+
+A new table can be created that points to an existing table's data. This is a very quick metadata operation; no data is actually copied. The cloned table and the source table can change independently after the clone operation. One use case for this feature is testing. For example, to test a new filtering iterator, clone the table, add the filter to the clone, and force a major compaction. To perform a test on less data, clone a table and then use delete range to efficiently remove a lot of data from the clone. Another use case is generating a snapshot to guard against human error. To create a snapshot, clone a table and then disable write permissions on the clone. 
+
+The clone operation will point to the source table's files. This is why the flush option is present and is enabled by default in the shell. If the flush option is not enabled, then any data the source table currently has in memory will not exist in the clone. 
+
+A cloned table copies the configuration of the source table. However, the permissions of the source table are not copied to the clone. After a clone is created, only the user that created the clone can read and write to it. 
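+
+If other users need access to a clone, the permissions must be granted explicitly. For example, the following shell commands (using a hypothetical user ``bob'' and the clone ``test'' created in the example below) would grant read and write access: 
+    
+    
+    root@a14> grant Table.READ -t test -u bob
+    root@a14> grant Table.WRITE -t test -u bob
+    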
+
+In the following example we see that data inserted after the clone operation is not visible in the clone. 
+    
+    
+    root@a14> createtable people
+    root@a14 people> insert 890435 name last Doe
+    root@a14 people> insert 890435 name first John
+    root@a14 people> clonetable people test  
+    root@a14 people> insert 890436 name first Jane
+    root@a14 people> insert 890436 name last Doe  
+    root@a14 people> scan
+    890435 name:first []    John
+    890435 name:last []    Doe
+    890436 name:first []    Jane
+    890436 name:last []    Doe
+    root@a14 people> table test
+    root@a14 test> scan
+    890435 name:first []    John
+    890435 name:last []    Doe
+    root@a14 test>
+    
+
+The du command in the shell shows how much space a table is using in HDFS. This command can also show how much overlapping space two cloned tables have in HDFS. In the example below, du shows table ci is using 428M. Then ci is cloned to cic, and du shows that both tables share 428M. After three entries are inserted into cic and it is flushed, du shows the two tables still share 428M but cic has 226 bytes to itself. Finally, table cic is compacted, and then du shows that each table uses 428M. 
+    
+    
+    root@a14> du ci           
+                 428,482,573 [ci]
+    root@a14> clonetable ci cic
+    root@a14> du ci cic
+                 428,482,573 [ci, cic]
+    root@a14> table cic
+    root@a14 cic> insert r1 cf1 cq1 v1
+    root@a14 cic> insert r1 cf1 cq2 v2
+    root@a14 cic> insert r1 cf1 cq3 v3 
+    root@a14 cic> flush -t cic -w 
+    27 15:00:13,908 [shell.Shell] INFO : Flush of table cic completed.
+    root@a14 cic> du ci cic       
+                 428,482,573 [ci, cic]
+                         226 [cic]
+    root@a14 cic> compact -t cic -w
+    27 15:00:35,871 [shell.Shell] INFO : Compacting table ...
+    27 15:03:03,303 [shell.Shell] INFO : Compaction of table cic completed for given range
+    root@a14 cic> du ci cic        
+                 428,482,573 [ci]
+                 428,482,612 [cic]
+    root@a14 cic>
+    
+
+* * *
+
+** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**
+
+   [2]: Table_Design.html
+   [4]: accumulo_user_manual.html
+   [6]: Writing_Accumulo_Clients.html
+   [8]: Contents.html
+   [9]: Table_Configuration.html#Locality_Groups
+   [10]: Table_Configuration.html#Constraints
+   [11]: Table_Configuration.html#Bloom_Filters
+   [12]: Table_Configuration.html#Iterators
+   [13]: Table_Configuration.html#Block_Cache
+   [14]: Table_Configuration.html#Compaction
+   [15]: Table_Configuration.html#Pre-splitting_tables
+   [16]: Table_Configuration.html#Merging_tablets
+   [17]: Table_Configuration.html#Delete_Range
+   [18]: Table_Configuration.html#Cloning_Tables
+   [19]: img2.png
+   [20]: img3.png
+

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Configuration.mdtext
------------------------------------------------------------------------------
    svn:eol-style = native

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Design.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Design.mdtext?rev=1304563&view=auto
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Design.mdtext (added)
+++ incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Design.mdtext Fri Mar 23 19:07:51 2012
@@ -0,0 +1,207 @@
+Title: Apache Accumulo User Manual: Table Design
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+** Next:** [High-Speed Ingest][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Table Configuration][6]   ** [Contents][8]**   
+  
+<a id=CHILD_LINKS></a>**Subsections**
+
+* [Basic Table][9]
+* [RowID Design][10]
+* [Indexing][11]
+* [Entity-Attribute and Graph Tables][12]
+* [Document-Partitioned Indexing][13]
+
+* * *
+
+## <a id=Table_Design></a> Table Design
+
+## <a id=Basic_Table></a> Basic Table
+
+Since Accumulo tables are sorted by row ID, each table can be thought of as being indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing a binary search, first across the tablets, and then within a tablet. Clients should choose a row ID carefully in order to support their desired application. A simple rule is to select a unique identifier as the row ID for each entity to be stored and to assign all the other attributes to be tracked as columns under this row ID. For example, if we have the following data in a comma-separated file: 
+    
+    
+        userid,age,address,account-balance
+    
+
+We might choose to store this data using the userid as the rowID and the rest of the data in column families: 
+    
+    
+    // each attribute is stored in its own column family, with an empty column qualifier
+    Mutation m = new Mutation(new Text(userid));
+    m.put(new Text("age"), new Text(""), new Value(age.getBytes()));
+    m.put(new Text("address"), new Text(""), new Value(address.getBytes()));
+    m.put(new Text("balance"), new Text(""), new Value(account_balance.getBytes()));
+    
+    writer.addMutation(m);
+    
+
+We could then retrieve any of the columns for a specific userid by specifying the userid as the range of a scanner and fetching specific columns: 
+    
+    
+    Range r = new Range(userid, userid); // single row
+    Scanner s = conn.createScanner("userdata", auths);
+    s.setRange(r);
+    s.fetchColumnFamily(new Text("age"));
+    
+    for(Entry<Key,Value> entry : s)
+        System.out.println(entry.getValue().toString());
+    
+
+## <a id=RowID_Design></a> RowID Design
+
+Often it is necessary to transform the rowID in order to have rows ordered in a way that is optimal for anticipated access patterns. A good example of this is reversing the order of components of internet domain names in order to group rows of the same parent domain together: 
+    
+    
+    com.google.code
+    com.google.labs
+    com.google.mail
+    com.yahoo.mail
+    com.yahoo.research
+    
+
+Some data may result in the creation of very large rows - rows with many columns. In this case the table designer may wish to split up these rows for better load balancing while keeping them sorted together for scanning purposes. This can be done by appending a random substring at the end of the row: 
+    
+    
+    com.google.code_00
+    com.google.code_01
+    com.google.code_02
+    com.google.labs_00
+    com.google.mail_00
+    com.google.mail_01
+    
+
+It could also be done by appending a string representation of some period of time, such as the date truncated to the week or month: 
+    
+    
+    com.google.code_201003
+    com.google.code_201004
+    com.google.code_201005
+    com.google.labs_201003
+    com.google.mail_201003
+    com.google.mail_201004
+    
+
+Appending dates provides the additional capability of restricting a scan to a given date range. 
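+
+For example, a scan can be limited to a single domain and a range of months by using the suffixed rowIDs as range bounds. This is a minimal sketch, assuming a Scanner created as in the earlier examples: 
+    
+    
+    // restrict the scan to rows for com.google.code from March through April 2010
+    scan.setRange(new Range("com.google.code_201003", "com.google.code_201004"));
+    
+    for(Entry<Key,Value> entry : scan)
+        System.out.println(entry.getKey() + " " + entry.getValue());
+    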
+
+## <a id=Indexing></a> Indexing
+
+In order to support lookups via more than one attribute of an entity, additional indexes can be built. However, because Accumulo tables can support any number of columns without specifying them beforehand, a single additional index will often suffice for supporting lookups of records in the main table. Here, the index has, as the rowID, the Value or Term from the main table, the column families are the same, and the column qualifier of the index table contains the rowID from the main table. 
+
+![converted table][14]
+
+Note: We store rowIDs in the column qualifier rather than the Value so that we can have more than one rowID associated with a particular term within the index. If we stored this in the Value we would only see one of the rows in which the value appears since Accumulo is configured by default to return the one most recent value associated with a key. 
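+
+As a sketch of what building such an index looks like, an ingest application could write one index entry per term alongside each record written to the main table; the table name ``index'', the BatchWriter ``indexWriter'', and the other variables here are illustrative: 
+    
+    
+    // index entry: rowID = the term, column family = the same family as the main table,
+    // column qualifier = the rowID of the record in the main table, value left empty
+    Mutation indexMutation = new Mutation(new Text(term));
+    indexMutation.put(new Text("attributes"), new Text(mainTableRowID), new Value("".getBytes()));
+    
+    indexWriter.addMutation(indexMutation);
+    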
+
+Lookups can then be done by scanning the Index Table first for occurrences of the desired values in the columns specified, which returns a list of row IDs from the main table. These can then be used to retrieve each matching record, in its entirety or a subset of its columns, from the Main Table. 
+
+To support efficient lookups of multiple rowIDs from the same table, the Accumulo client library provides a BatchScanner. Users specify a set of Ranges to the BatchScanner, which performs the lookups in multiple threads to multiple servers and returns an Iterator over all the rows retrieved. The rows returned are NOT in sorted order, as is the case with the basic Scanner interface. 
+    
+    
+    // first we scan the index for the IDs of rows matching our query
+    
+    Text term = new Text("mySearchTerm");
+    
+    HashSet<Range> matchingRows = new HashSet<Range>();
+    
+    Scanner indexScanner = conn.createScanner("index", auths);
+    indexScanner.setRange(new Range(term, term));
+    
+    // the matching rowIDs are stored in the column qualifiers of the index entries;
+    // we collect them as a set of single-row ranges
+    for(Entry<Key,Value> entry : indexScanner)
+        matchingRows.add(new Range(entry.getKey().getColumnQualifier()));
+    
+    // now we pass the set of ranges to the batch scanner to retrieve the records
+    BatchScanner bscan = conn.createBatchScanner("table", auths, 10);
+    
+    bscan.setRanges(matchingRows);
+    bscan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : bscan)
+        System.out.println(entry.getValue());
+    
+
+One advantage of the dynamic schema capabilities of Accumulo is that different fields may be indexed into the same physical table. However, it may be necessary to create different index tables if the terms must be formatted differently in order to maintain proper sort order. For example, real numbers must be formatted differently than their usual notation in order to be sorted correctly. In these cases, usually one index per unique data type will suffice. 
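+
+One common approach, shown as a sketch below, is to encode numeric terms with a fixed width so that their lexicographic order matches their numeric order; without such an encoding, ``10'' would sort before ``9'': 
+    
+    
+    // zero-pad non-negative integers to a fixed width before using them as index terms,
+    // so that lexicographic order matches numeric order (42 becomes "0000000042")
+    String indexTerm = String.format("%010d", balance);
+    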
+
+## <a id=Entity-Attribute_and_Graph_Tables></a> Entity-Attribute and Graph Tables
+
+Accumulo is ideal for storing entities and their attributes, especially if the attributes are sparse. It is often useful to join several datasets together on common entities within the same table. This can allow for the representation of graphs, including nodes, their attributes, and connections to other nodes. 
+
+Rather than storing individual events, Entity-Attribute or Graph tables store aggregate information about the entities involved in the events and the relationships between entities. This is often preferable when single events aren't very useful and when a continuously updated summarization is desired. 
+
+The physical schema for an entity-attribute or graph table is as follows: 
+
+![converted table][15]
+
+For example, to keep track of employees, managers, and products, the following entity-attribute table could be used. Note that the weights are not always necessary and are set to 0 when not used. 
+
+![converted table][16]   
+  
+
+
+To allow efficient updating of edge weights, an aggregating iterator can be configured to add the value of all mutations applied with the same key. These types of tables can easily be created from raw events by simply extracting the entities, attributes, and relationships from individual events and inserting the keys into Accumulo each with a count of 1. The aggregating iterator will take care of maintaining the edge weights. 
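+
+As a sketch of such a configuration, assuming the 1.4 combiner iterators and a hypothetical table named ``edges'', a SummingCombiner can be attached so that values written with the same key are added together: 
+    
+    
+    IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
+    
+    // apply the combiner to every column and interpret values as string-encoded longs
+    SummingCombiner.setCombineAllColumns(setting, true);
+    SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+    
+    // attach the iterator to the table for scan, minor compaction, and major compaction scopes
+    conn.tableOperations().attachIterator("edges", setting);
+    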
+
+## <a id=Document-Partitioned_Indexing></a> Document-Partitioned Indexing
+
+Using a simple index as described above works well when looking for records that match one of a set of given criteria. When looking for records that match more than one criterion simultaneously, such as when looking for documents that contain all of the words `the' and `white' and `house', there are several issues. 
+
+The first issue is that the set of all records matching any one of the search terms must be sent to the client, which incurs a lot of network traffic. The second is that the client is responsible for performing set intersection on the sets of records returned to eliminate all but the records matching all search terms. The memory of the client may easily be overwhelmed during this operation. 
+
+For these reasons Accumulo includes support for a scheme known as sharded indexing, in which these set operations can be performed at the TabletServers and decisions about which records to include in the result set can be made without incurring network traffic. 
+
+This is accomplished via partitioning records into bins that each reside on at most one TabletServer, and then creating an index of terms per record within each bin as follows: 
+
+![converted table][17]
+
+Documents or records are mapped into bins by a user-defined ingest application. By storing the BinID as the RowID we ensure that all the information for a particular bin is contained in a single tablet and hosted on a single TabletServer since Accumulo never splits rows across tablets. Storing the Terms as column families serves to enable fast lookups of all the documents within this bin that contain the given term. 
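+
+A sketch of what the ingest side might write for one term of one document follows; the bin assignment and the names used are illustrative: 
+    
+    
+    // one entry per (term, document) pair within the document's bin:
+    // rowID = the binID, column family = the term, column qualifier = the docID
+    Mutation m = new Mutation(new Text(binID));
+    m.put(new Text("the"), new Text(docID), new Value("".getBytes()));
+    
+    writer.addMutation(m);
+    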
+
+Finally, we perform set intersection operations on the TabletServer via a special iterator called the Intersecting Iterator. Since documents are partitioned into many bins, a search of all documents must search every bin. We can use the BatchScanner to scan all bins in parallel. The Intersecting Iterator should be enabled on a BatchScanner within user query code as follows: 
+    
+    
+    Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+    
+    BatchScanner bs = conn.createBatchScanner(table, auths, 20);
+    IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
+    IntersectingIterator.setColumnFamilies(iter, terms);
+    bs.addScanIterator(iter);
+    bs.setRanges(Collections.singleton(new Range()));
+    
+    for(Entry<Key,Value> entry : bs) {
+        System.out.println(" " + entry.getKey().getColumnQualifier());
+    }
+    
+
+This code effectively has the BatchScanner scan all tablets of a table, looking for documents that match all the given terms. Because all tablets are being scanned for every query, each query is more expensive than other Accumulo scans, which typically involve a small number of TabletServers. This reduces the number of concurrent queries supported and is subject to what is known as the `straggler' problem in which every query runs as slow as the slowest server participating. 
+
+Of course, fast servers will return their results to the client which can display them to the user immediately while they wait for the rest of the results to arrive. If the results are unordered this is quite effective as the first results to arrive are as good as any others to the user. 
+
+* * *
+
+** Next:** [High-Speed Ingest][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Table Configuration][6]   ** [Contents][8]**
+
+   [2]: High_Speed_Ingest.html
+   [4]: accumulo_user_manual.html
+   [6]: Table_Configuration.html
+   [8]: Contents.html
+   [9]: Table_Design.html#Basic_Table
+   [10]: Table_Design.html#RowID_Design
+   [11]: Table_Design.html#Indexing
+   [12]: Table_Design.html#Entity-Attribute_and_Graph_Tables
+   [13]: Table_Design.html#Document-Partitioned_Indexing
+   [14]: img4.png
+   [15]: img5.png
+   [16]: img6.png
+   [17]: img7.png
+

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Table_Design.mdtext
------------------------------------------------------------------------------
    svn:eol-style = native

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Writing_Accumulo_Clients.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Writing_Accumulo_Clients.mdtext?rev=1304563&view=auto
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Writing_Accumulo_Clients.mdtext (added)
+++ incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Writing_Accumulo_Clients.mdtext Fri Mar 23 19:07:51 2012
@@ -0,0 +1,177 @@
+Title: Apache Accumulo User Manual: Writing Accumulo Clients
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+** Next:** [Table Configuration][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Accumulo Shell][6]   ** [Contents][8]**   
+  
+<a id=CHILD_LINKS></a>**Subsections**
+
+* [Running Client Code][9]
+* [Connecting][10]
+* [Writing Data][11]
+* [Reading Data][12]
+
+* * *
+
+## <a id=Writing_Accumulo_Clients></a> Writing Accumulo Clients
+
+## <a id=Running_Client_Code></a> Running Client Code
+
+There are multiple ways to run Java code that uses Accumulo. Below is a list of the different ways to execute client code. 
+
+* using java executable 
+* using the accumulo script 
+* using the tool script 
+
+In order to run client code written against Accumulo, you will need to include the jars that Accumulo depends on in your classpath. Accumulo client code depends on Hadoop and ZooKeeper. For Hadoop, add the Hadoop core jar, all of the jars in the Hadoop lib directory, and the conf directory to the classpath. For ZooKeeper 3.3, you only need to add the ZooKeeper jar, not what is in the ZooKeeper lib directory. You can run the following command on a configured Accumulo system to see what it is using for its classpath. 
+    
+     
+    $ACCUMULO_HOME/bin/accumulo classpath
+    
+
+Another option for running your code is to put a jar file in $ACCUMULO_HOME/lib/ext. After doing this, you can use the accumulo script to execute your code. For example, if you create a jar containing the class com.foo.Client and place it in lib/ext, then you can use the command $ACCUMULO_HOME/bin/accumulo com.foo.Client to execute your code. 
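+
+For example, the two steps might look like the following (the jar name is illustrative): 
+    
+    
+    cp my-client.jar $ACCUMULO_HOME/lib/ext
+    $ACCUMULO_HOME/bin/accumulo com.foo.Client
+    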
+
+If you are writing a MapReduce job that accesses Accumulo, you can use the bin/tool.sh script to run it. See the MapReduce example. 
+
+## <a id=Connecting></a> Connecting
+
+All clients must first identify the Accumulo instance to which they will be communicating. Code to do this is as follows: 
+    
+    
+    String instanceName = "myinstance";
+    String zooServers = "zooserver-one,zooserver-two";
+    Instance inst = new ZooKeeperInstance(instanceName, zooServers);
+    
+    Connector conn = inst.getConnector("user", "passwd");
+    
+
+## <a id=Writing_Data></a> Writing Data
+
+Data are written to Accumulo by creating Mutation objects that represent all the changes to the columns of a single row. The changes are made atomically in the TabletServer. Clients then add Mutations to a BatchWriter which submits them to the appropriate TabletServers. 
+
+Mutations can be created thus: 
+    
+    
+    Text rowID = new Text("row1");
+    Text colFam = new Text("myColFam");
+    Text colQual = new Text("myColQual");
+    ColumnVisibility colVis = new ColumnVisibility("public");
+    long timestamp = System.currentTimeMillis();
+    
+    Value value = new Value("myValue".getBytes());
+    
+    Mutation mutation = new Mutation(rowID);
+    mutation.put(colFam, colQual, colVis, timestamp, value);
+    
+
+### <a id=BatchWriter></a> BatchWriter
+
+The BatchWriter is highly optimized to send Mutations to multiple TabletServers and automatically batches Mutations destined for the same TabletServer to amortize network overhead. Care must be taken to avoid changing the contents of any Object passed to the BatchWriter since it keeps objects in memory while batching. 
+
+Mutations are added to a BatchWriter thus: 
+    
+    
+    long memBuf = 1000000L; // bytes to store before sending a batch
+    long timeout = 1000L; // milliseconds to wait before sending
+    int numThreads = 10;
+    
+    BatchWriter writer =
+        conn.createBatchWriter("table", memBuf, timeout, numThreads);
+    
+    writer.addMutation(mutation);
+    
+    writer.close();
+    
+
+An example of using the batch writer can be found at   
+accumulo/docs/examples/README.batch 
+
+## <a id=Reading_Data></a> Reading Data
+
+Accumulo is optimized to quickly retrieve the value associated with a given key, and to efficiently return ranges of consecutive keys and their associated values. 
+
+### <a id=Scanner></a> Scanner
+
+To retrieve data, clients use a Scanner, which acts like an Iterator over keys and values. Scanners can be configured to start and stop at particular keys, and to return a subset of the columns available. 
+    
+    
+    // specify which visibilities we are allowed to see
+    Authorizations auths = new Authorizations("public");
+    
+    Scanner scan =
+        conn.createScanner("table", auths);
+    
+    scan.setRange(new Range("harry","john"));
+    scan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : scan) {
+        Text row = entry.getKey().getRow();
+        Value value = entry.getValue();
+    }
+    
+
+### <a id=Isolated_Scanner></a> Isolated Scanner
+
+Accumulo supports the ability to present an isolated view of rows when scanning. There are three possible ways that a row could change in Accumulo: 
+
+* a mutation applied to a table 
+* iterators executed as part of a minor or major compaction 
+* bulk import of new files 
+
+Isolation guarantees that either all or none of the changes made by these operations on a row are seen. Use the IsolatedScanner to obtain an isolated view of an Accumulo table. When using the regular scanner, it is possible to see a non-isolated view of a row. For example, if a mutation modifies three columns, it is possible that you will see only two of those modifications. With the isolated scanner, either all three of the changes are seen or none. 
+
+The IsolatedScanner buffers rows on the client side so a large row will not crash a tablet server. By default rows are buffered in memory, but the user can easily supply their own buffer if they wish to buffer to disk when rows are large. 
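+
+A minimal sketch of wrapping a regular scanner follows, assuming a Connector and Authorizations as in the Scanner example above: 
+    
+    
+    // wrap an ordinary Scanner to get row-level isolation; reads are otherwise identical
+    Scanner scanner = conn.createScanner("table", auths);
+    Scanner isolated = new IsolatedScanner(scanner);
+    
+    for(Entry<Key,Value> entry : isolated)
+        System.out.println(entry.getKey() + " " + entry.getValue());
+    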
+
+For an example, look at the following   
+src/examples/src/main/java/org/apache/accumulo/examples/isolation/InterferenceTest.java
+
+### <a id=BatchScanner></a> BatchScanner
+
+For some types of access, it is more efficient to retrieve several ranges simultaneously. This arises, for example, when accessing a set of rows that are not consecutive and whose IDs have been retrieved from a secondary index. 
+
+The BatchScanner is configured similarly to the Scanner; it can be configured to retrieve a subset of the columns available, but rather than passing a single Range, BatchScanners accept a set of Ranges. It is important to note that the keys returned by a BatchScanner are not in sorted order since the keys streamed are from multiple TabletServers in parallel. 
+    
+    
+    ArrayList<Range> ranges = new ArrayList<Range>();
+    // populate list of ranges ...
+    
+    BatchScanner bscan =
+        conn.createBatchScanner("table", auths, 10);
+    
+    bscan.setRanges(ranges);
+    bscan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : bscan)
+        System.out.println(entry.getValue());
+    
+
+An example of the BatchScanner can be found at   
+accumulo/docs/examples/README.batch 
+
+* * *
+
+** Next:** [Table Configuration][2] ** Up:** [Apache Accumulo User Manual Version 1.4][4] ** Previous:** [Accumulo Shell][6]   ** [Contents][8]**
+
+   [2]: Table_Configuration.html
+   [4]: accumulo_user_manual.html
+   [6]: Accumulo_Shell.html
+   [8]: Contents.html
+   [9]: Writing_Accumulo_Clients.html#Running_Client_Code
+   [10]: Writing_Accumulo_Clients.html#Connecting
+   [11]: Writing_Accumulo_Clients.html#Writing_Data
+   [12]: Writing_Accumulo_Clients.html#Reading_Data
+

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/Writing_Accumulo_Clients.mdtext
------------------------------------------------------------------------------
    svn:eol-style = native

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/accumulo_user_manual.mdtext
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/accumulo_user_manual.mdtext?rev=1304563&view=auto
==============================================================================
--- incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/accumulo_user_manual.mdtext (added)
+++ incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/accumulo_user_manual.mdtext Fri Mar 23 19:07:51 2012
@@ -0,0 +1,63 @@
+Title: Apache Accumulo User Manual: index
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+** Next:** [Contents][2]   ** [Contents][2]**   
+  
+
+
+## Apache Accumulo User Manual   
+Version 1.4
+
+  
+
+
+* * *
+
+<a id=CHILD_LINKS></a>
+
+* [Contents][2]
+* [Introduction][6]
+* [Accumulo Design][7]
+* [Accumulo Shell][8]
+* [Writing Accumulo Clients][9]
+* [Table Configuration][10]
+* [Table Design][11]
+* [High-Speed Ingest][12]
+* [Analytics][13]
+* [Security][14]
+* [Administration][15]
+* [Shell Commands][16]
+
+  
+
+
+* * *
+
+   [2]: Contents.html
+   [6]: Introduction.html
+   [7]: Accumulo_Design.html
+   [8]: Accumulo_Shell.html
+   [9]: Writing_Accumulo_Clients.html
+   [10]: Table_Configuration.html
+   [11]: Table_Design.html
+   [12]: High_Speed_Ingest.html
+   [13]: Analytics.html
+   [14]: Security.html
+   [15]: Administration.html
+   [16]: Shell_Commands.html
+

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/accumulo_user_manual.mdtext
------------------------------------------------------------------------------
    svn:eol-style = native

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/data_distribution.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/data_distribution.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/data_distribution.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/failure_handling.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/failure_handling.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/failure_handling.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img1.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img1.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img1.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img2.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img2.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img2.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img3.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img3.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img3.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img4.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img4.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img4.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img5.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img5.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img5.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img6.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img6.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img6.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img7.png
URL: http://svn.apache.org/viewvc/incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img7.png?rev=1304563&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/accumulo/site/trunk/content/accumulo/1.4/user_manual/img7.png
------------------------------------------------------------------------------
    svn:mime-type = image/png