Posted to commits@accumulo.apache.org by el...@apache.org on 2014/05/09 17:29:17 UTC

[39/50] [abbrv] ACCUMULO-1327 converted latex manual to asciidoc

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/high_speed_ingest.txt b/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
new file mode 100644
index 0000000..909f4c4
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
@@ -0,0 +1,125 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== High-Speed Ingest
+
+Accumulo is often used as part of a larger data processing and storage system. To
+maximize the performance of a parallel system involving Accumulo, the ingestion
+and query components should be designed to provide enough parallelism and
+concurrency to avoid creating bottlenecks for users and other systems writing to
+and reading from Accumulo. There are several ways to achieve high ingest
+performance.
+
+=== Pre-Splitting New Tables
+
+New tables consist of a single tablet by default. As mutations are applied, the table
+grows and splits into multiple tablets which are balanced by the Master across
+TabletServers. This means that the aggregate ingest rate will be limited to a subset of
+the available servers until the table has grown to the point where there are tablets on
+every TabletServer.
+
+Pre-splitting a table ensures that the desired number of tablets is available before
+ingest begins, so that all of the parallelism possible with the cluster hardware can be
+used. Tables can be split at any time by using the shell:
+
+  user@myinstance mytable> addsplits -sf /local_splitfile -t mytable
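+
+Splits can also be added through the Java client API. The following is a minimal
+sketch, assuming an existing +Connector+ named +conn+; the split points are
+illustrative and should be chosen to match the expected rowID distribution.
+
+[source,java]
+----
+// pre-split "mytable" so ingest can immediately use many TabletServers
+TreeSet<Text> splits = new TreeSet<Text>();
+for (String point : new String[] {"d", "h", "l", "p", "t", "x"})
+  splits.add(new Text(point));
+
+conn.tableOperations().addSplits("mytable", splits);
+----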
+
+For the purposes of providing parallelism to ingest, it is not necessary to create more
+tablets than there are physical machines within the cluster, as the aggregate ingest
+rate is a function of the number of physical machines. Note that the aggregate ingest
+rate is still subject to the number of machines running ingest clients, and the
+distribution of rowIDs across the table. The aggregate ingest rate will be
+suboptimal if there are many inserts into a small number of rowIDs.
+
+=== Multiple Ingester Clients
+
+Accumulo is capable of scaling to very high rates of ingest, which is dependent upon
+not just the number of TabletServers in operation but also the number of ingest
+clients. This is because a single client, while capable of batching mutations and
+sending them to all TabletServers, is ultimately limited by the amount of data that
+can be processed on a single machine. The aggregate ingest rate will scale linearly
+with the number of clients up to the point at which either the aggregate I/O of
+TabletServers or total network bandwidth capacity is reached.
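+
+As a rough sketch of the client side (assuming an existing +Connector+ named
++conn+ and a +Mutation+ named +m+; the tuning values are illustrative), each
+ingest client writes through its own BatchWriter, whose buffer size and number
+of send threads can be tuned:
+
+[source,java]
+----
+BatchWriterConfig config = new BatchWriterConfig();
+config.setMaxMemory(64 * 1024 * 1024);      // buffer up to 64MB of mutations before sending
+config.setMaxLatency(2, TimeUnit.MINUTES);  // flush buffered mutations at least this often
+config.setMaxWriteThreads(8);               // threads used to send mutations to TabletServers
+
+BatchWriter writer = conn.createBatchWriter("mytable", config);
+writer.addMutation(m);
+writer.close();
+----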
+
+In operational settings where high rates of ingest are paramount, clusters are often
+configured to dedicate some number of machines solely to running Ingester Clients.
+The exact ratio of clients to TabletServers necessary for optimum ingestion rates
+will vary according to the distribution of resources per machine and by data type.
+
+=== Bulk Ingest
+
+Accumulo supports the ability to import files produced by an external process such
+as MapReduce into an existing table. In some cases it may be faster to load data this
+way rather than by ingesting through clients using BatchWriters. This allows a large
+number of machines to format data the way Accumulo expects. The new files can
+then simply be introduced to Accumulo via a shell command.
+
+To configure MapReduce to format data in preparation for bulk loading, the job
+should be set to use a range partitioner instead of the default hash partitioner. The
+range partitioner uses the split points of the Accumulo table that will receive the
+data. The split points can be obtained from the shell and used by the MapReduce
+RangePartitioner. Note that this is only useful if the existing table is already split
+into multiple tablets.
+
+  user@myinstance mytable> getsplits
+  aa
+  ab
+  ac
+  ...
+  zx
+  zy
+  zz
+
+Run the MapReduce job, using the AccumuloFileOutputFormat to create the files to
+be introduced to Accumulo. Once this is complete, the files can be added to
+Accumulo via the shell:
+
+  user@myinstance mytable> importdirectory /files_dir /failures
+
+Note that the paths referenced are directories within the same HDFS instance over
+which Accumulo is running. Accumulo places any files that failed to be added to the
+second directory specified.
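+
+Putting this together, the job configuration might look roughly like the sketch
+below. It assumes an existing +Connector+ named +conn+ and a Hadoop
++Configuration+ named +conf+; the paths are illustrative, and the split file
+passed to the RangePartitioner must contain the table's split points (commonly
+Base64 encoded, one per line -- see the bulk ingest example for the exact
+format).
+
+[source,java]
+----
+// obtain the receiving table's split points; these must also be written to the
+// split file referenced below
+Collection<Text> splits = conn.tableOperations().listSplits("mytable");
+
+Job job = Job.getInstance(conf, "accumulo bulk ingest");
+job.setMapOutputKeyClass(Key.class);
+job.setMapOutputValueClass(Value.class);
+
+// reducers write RFiles that Accumulo can import directly
+job.setOutputFormatClass(AccumuloFileOutputFormat.class);
+AccumuloFileOutputFormat.setOutputPath(job, new Path("/files_dir"));
+
+// partition map output by the receiving table's split points
+job.setPartitionerClass(RangePartitioner.class);
+RangePartitioner.setSplitFile(job, "/tmp/mytable_splits.txt");
+job.setNumReduceTasks(splits.size() + 1); // one reducer per tablet
+----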
+
+A complete example of using Bulk Ingest can be found at
++accumulo/docs/examples/README.bulkIngest+.
+
+=== Logical Time for Bulk Ingest
+
+Logical time is important for bulk imported data, for which the client code may
+be choosing a timestamp. At bulk import time, the user can choose to enable
+logical time for the set of files being imported. When it is enabled, Accumulo
+uses a specialized system iterator to lazily set times in a bulk imported file.
+This mechanism guarantees that times set by unsynchronized multi-node
+applications (such as those running on MapReduce) will maintain some semblance
+of causal ordering. This mitigates the problem of the time being wrong on the
+system that created the file for bulk import. These times are not set when the
+file is imported, but whenever it is read by scans or compactions. A single time
+is obtained at import, and the specialized system iterator always uses that time
+when setting timestamps.
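+
+Programmatically, the same import can be requested through the client API. In
+the following sketch (assuming an existing +Connector+ named +conn+), the final
+argument enables logical time for the imported files:
+
+[source,java]
+----
+// arguments: table name, directory of RFiles, failure directory, setTime
+conn.tableOperations().importDirectory("mytable", "/files_dir", "/failures", true);
+----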
+
+The timestamp assigned by Accumulo will be the same for every key in the file.
+This could cause problems if the file contains multiple keys that are identical
+except for the timestamp. In this case, the sort order of the keys will be
+undefined. This could occur if an insert and an update were in the same bulk
+import file.
+
+=== MapReduce Ingest
+It is possible to efficiently write many mutations to Accumulo in parallel via a
+MapReduce job. In this scenario the MapReduce job is written to process data that lives
+in HDFS and write mutations to Accumulo using the AccumuloOutputFormat. See
+the MapReduce section under Analytics for details.
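+
+As a rough sketch (the input format and field layout are illustrative), a map
+task can emit mutations directly; with AccumuloOutputFormat the output key
+names the destination table:
+
+[source,java]
+----
+import java.io.IOException;
+
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Mapper;
+
+public class IngestMapper extends Mapper<LongWritable,Text,Text,Mutation> {
+  @Override
+  public void map(LongWritable offset, Text line, Context context)
+      throws IOException, InterruptedException {
+    // assume each input line is "row,family,qualifier,value"
+    String[] parts = line.toString().split(",");
+    Mutation m = new Mutation(parts[0]);
+    m.put(parts[1], parts[2], new Value(parts[3].getBytes()));
+    context.write(new Text("mytable"), m);
+  }
+}
+----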
+
+An example of using MapReduce can be found under
++accumulo/docs/examples/README.mapred+.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/introduction.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/introduction.txt b/docs/src/main/asciidoc/chapters/introduction.txt
new file mode 100644
index 0000000..1b964b4
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/introduction.txt
@@ -0,0 +1,25 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Introduction
+Apache Accumulo is a highly scalable structured store based on Google's BigTable.
+Accumulo is written in Java and operates over the Hadoop Distributed File System
+(HDFS), which is part of the popular Apache Hadoop project. Accumulo supports
+efficient storage and retrieval of structured data, including queries for ranges, and
+provides support for using Accumulo tables as input and output for MapReduce
+jobs.
+
+Accumulo features automatic load-balancing and partitioning, data compression
+and fine-grained security labels.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/multivolume.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/multivolume.txt b/docs/src/main/asciidoc/chapters/multivolume.txt
new file mode 100644
index 0000000..8921f2d
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/multivolume.txt
@@ -0,0 +1,82 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Multi-Volume Installations
+
+This is an advanced configuration setting for very large clusters
+under a lot of write pressure.
+
+The HDFS NameNode holds all of the metadata about the files in
+HDFS. For fast performance, all of this information needs to be stored
+in memory.  A single NameNode with 64G of memory can store the
+metadata for tens of millions of files.  However, when scaling beyond a
+thousand nodes, an active Accumulo system can generate lots of updates
+to the file system, especially when data is being ingested.  The large
+number of write transactions to the NameNode, and the speed of a
+single edit log, can become the limiting factor for large scale
+Accumulo installations.
+
+You can see the effect of slow write transactions when the Accumulo
+Garbage Collector takes a long time (more than 5 minutes) to delete
+the files Accumulo no longer needs.  If your Garbage Collector
+routinely runs in less than a minute, the NameNode is performing well.
+
+However, if you do begin to experience slow-down and poor GC
+performance, Accumulo can be configured to use multiple NameNode
+servers.  The configuration ``instance.volumes'' should be set to a
+comma-separated list, using full URI references to different NameNode
+servers:
+
+[source,xml]
+<property>
+    <name>instance.volumes</name>
+    <value>hdfs://ns1:9001,hdfs://ns2:9001</value>
+</property>
+
+The introduction of multiple volume support in 1.6 changed the way Accumulo
+stores pointers to files.  It now stores fully qualified URI references to
+files.  Before 1.6, Accumulo stored paths that were relative to a table
+directory.   After an upgrade these relative paths will still exist and are
+resolved using instance.dfs.dir, instance.dfs.uri, and Hadoop configuration in
+the same way they were before 1.6.
+
+If the URI for a namenode changes (e.g. the namenode was running on host1 and is
+moved to host2), then Accumulo will no longer function.  Even if Hadoop and
+Accumulo configurations are changed, the fully qualified URIs stored in
+Accumulo will still contain the old URI.  To handle this, Accumulo has the
+following configuration property for replacing URIs stored in its metadata.  The
+example configuration below will replace ns1 with nsA and ns2 with nsB in
+Accumulo metadata.  For this property to take effect, Accumulo will need to be
+restarted.
+
+[source,xml]
+<property>
+    <name>instance.volumes.replacements</name>
+    <value>hdfs://ns1:9001 hdfs://nsA:9001, hdfs://ns2:9001 hdfs://nsB:9001</value>
+</property>
+
+Using viewfs or an HA namenode, introduced in Hadoop 2, offers another option for
+managing the fully qualified URIs stored in Accumulo.  Viewfs and HA namenodes
+both introduce a level of indirection in the Hadoop configuration.   For
+example, assume viewfs:///nn1 maps to hdfs://nn1 in the Hadoop configuration.
+If viewfs://nn1 is used by Accumulo, then it is easy to map viewfs://nn1 to
+hdfs://nnA by changing the Hadoop configuration without doing anything to Accumulo.
+A production system should probably use an HA namenode.  Viewfs may be useful on
+a test system with a single non-HA namenode.
+
+You may also want to configure your cluster to use Federation,
+available in Hadoop 2.0, which allows DataNodes to respond to multiple
+NameNode servers, so you do not have to partition your DataNodes by
+NameNode.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/security.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/security.txt b/docs/src/main/asciidoc/chapters/security.txt
new file mode 100644
index 0000000..98a82a1
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/security.txt
@@ -0,0 +1,176 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Security
+
+Accumulo extends the BigTable data model to implement a security mechanism
+known as cell-level security. Every key-value pair has its own security label, stored
+under the column visibility element of the key, which is used to determine whether
+a given user meets the security requirements to read the value. This enables data of
+various security levels to be stored within the same row, and users of varying
+degrees of access to query the same table, while preserving data confidentiality.
+
+=== Security Label Expressions
+
+When mutations are applied, users can specify a security label for each value. This is
+done as the Mutation is created by passing a ColumnVisibility object to the put()
+method:
+
+[source,java]
+----
+Text rowID = new Text("row1");
+Text colFam = new Text("myColFam");
+Text colQual = new Text("myColQual");
+ColumnVisibility colVis = new ColumnVisibility("public");
+long timestamp = System.currentTimeMillis();
+
+Value value = new Value("myValue".getBytes());
+
+Mutation mutation = new Mutation(rowID);
+mutation.put(colFam, colQual, colVis, timestamp, value);
+----
+
+=== Security Label Expression Syntax
+
+Security labels consist of a set of user-defined tokens that are required to read the
+value the label is associated with. The set of tokens required can be specified using
+syntax that supports logical AND and OR combinations of tokens, as well as nesting
+groups of tokens together.
+
+For example, suppose within our organization we want to label our data values with
+security labels defined in terms of user roles. We might have tokens such as:
+
+  admin
+  audit
+  system
+
+These can be specified alone or combined using logical operators:
+
+----
+// Users must have admin privileges
+admin
+
+// Users must have admin and audit privileges
+admin&audit
+
+// Users with either admin or audit privileges
+admin|audit
+
+// Users must have audit and one or both of admin or system
+(admin|system)&audit
+----
+
+When both +|+ and +&+ operators are used, parentheses must be used to specify
+precedence of the operators.
+
+=== Authorization
+
+When clients attempt to read data from Accumulo, any security labels present are
+examined against the set of authorizations passed by the client code when the
+Scanner or BatchScanner are created. If the authorizations are determined to be
+insufficient to satisfy the security label, the value is suppressed from the set of
+results sent back to the client.
+
+Authorizations are specified as a comma-separated list of tokens the user possesses:
+
+[source,java]
+----
+// user possesses both admin and system level access
+Authorizations auths = new Authorizations("admin","system");
+
+Scanner s = connector.createScanner("table", auths);
+----
+
+=== User Authorizations
+
+Each Accumulo user has a set of associated security labels. To manipulate
+these in the shell while using the default authorizor, use the setauths and getauths commands.
+These may also be modified for the default authorizor using the Java security operations API.
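+
+For example, the following sketch (assuming an existing +Connector+ named
++conn+ held by a user with permission to alter other users) sets and reads back
+a user's authorizations:
+
+[source,java]
+----
+// grant bob the ability to read data labeled admin and/or system
+conn.securityOperations().changeUserAuthorizations("bob",
+    new Authorizations("admin", "system"));
+
+// read back bob's current authorizations
+Authorizations bobsAuths = conn.securityOperations().getUserAuthorizations("bob");
+----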
+
+When a user creates a scanner, a set of Authorizations is passed. If the
+authorizations passed to the scanner are not a subset of the user's
+authorizations, then an exception will be thrown.
+
+To prevent users from writing data they cannot read, add the visibility
+constraint to a table. Use the -evc option in the createtable shell command to
+enable this constraint. For existing tables use the following shell command to
+enable the visibility constraint. Ensure the constraint number does not
+conflict with any existing constraints.
+
+  config -t table -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
+
+Any user with the alter table permission can add or remove this constraint.
+This constraint is not applied to bulk imported data. If this is a concern, then
+disable the bulk import permission.
+
+=== Pluggable Security
+
+New in Accumulo 1.5 is a pluggable security mechanism. It can be broken into three actions --
+authentication, authorization, and permission handling. By default all of these are handled in
+ZooKeeper, which is how things were handled in Accumulo 1.4 and before. It is worth noting that,
+because this is a new feature in 1.5, it may be adjusted in future releases without the standard
+deprecation cycle.
+
+Authentication simply handles the ability of a user to verify their identity. A combination of
+principal and authentication token is used to verify a user is who they say they are. An
+authentication token can be constructed directly through its constructor, but it is
+advised to use the +init(Property)+ method to populate it. It is expected that a
+user knows what the appropriate token to use for their system is. The default token is
++PasswordToken+.
+
+Once a user is authenticated by the Authenticator, the user has access to the other actions within
+Accumulo. All actions in Accumulo are ACLed, and this ACL check is handled by the Permission
+Handler. This is what manages all of the permissions, which are divided into system and per-table
+levels. From there, if a user is doing an action which requires authorizations, the Authorizor is
+queried to determine what authorizations the user has.
+
+This setup allows a variety of different mechanisms to be used for handling different aspects of
+Accumulo's security. A system like Kerberos can be used for authentication, then a system like LDAP
+could be used to determine if a user has a specific permission, and then it may default back to the
+default ZookeeperAuthorizor to determine what Authorizations a user is ultimately allowed to use.
+This is a pluggable system so custom components can be created depending on your need.
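+
+The implementations are selected through instance properties in
++accumulo-site.xml+. The property names below are believed to be correct for
+this release but should be verified against your version, and the class names
+are placeholders for whatever custom implementations are deployed:
+
+[source,xml]
+<property>
+    <name>instance.security.authenticator</name>
+    <value>com.example.security.MyAuthenticator</value>
+</property>
+<property>
+    <name>instance.security.authorizor</name>
+    <value>com.example.security.MyAuthorizor</value>
+</property>
+<property>
+    <name>instance.security.permissionHandler</name>
+    <value>com.example.security.MyPermissionHandler</value>
+</property>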
+
+=== Secure Authorizations Handling
+
+For applications serving many users, it is not expected that an Accumulo user
+will be created for each application user. In this case an Accumulo user with
+all authorizations needed by any of the application's users must be created. To
+service queries, the application should create a scanner with the application
+user's authorizations. These authorizations could be obtained from a trusted 3rd
+party.
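+
+A minimal sketch (assuming an existing +Connector+ named +conn+; the tokens are
+illustrative): the Accumulo user running the query layer must hold a superset
+of the authorizations passed in.
+
+[source,java]
+----
+// authorizations for this application user, obtained from a trusted third party
+Authorizations userAuths = new Authorizations("public", "audit");
+
+Scanner scanner = conn.createScanner("mytable", userAuths);
+----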
+
+Often production systems will integrate with Public-Key Infrastructure (PKI) and
+designate client code within the query layer to negotiate with PKI servers in order
+to authenticate users and retrieve their authorization tokens (credentials). This
+requires users to specify only the information necessary to authenticate themselves
+to the system. Once user identity is established, their credentials can be accessed by
+the client code and passed to Accumulo outside of the reach of the user.
+
+=== Query Services Layer
+
+Since the primary method of interaction with Accumulo is through the Java API,
+production environments often call for the implementation of a Query layer. This
+can be done using web services in containers such as Apache Tomcat, but is not a
+requirement. The Query Services Layer provides a platform on which user-facing
+applications can be built. This allows the application
+designers to isolate potentially complex query logic, and enables a convenient point
+at which to perform essential security functions.
+
+Several production environments choose to implement authentication at this layer,
+where user identifiers are used to retrieve their access credentials, which are then
+cached within the query layer and presented to Accumulo through the
+Authorizations mechanism.
+
+Typically, the query services layer sits between Accumulo and user workstations.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/shell.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/shell.txt b/docs/src/main/asciidoc/chapters/shell.txt
new file mode 100644
index 0000000..7afcd7d
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/shell.txt
@@ -0,0 +1,129 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Accumulo Shell
+Accumulo provides a simple shell that can be used to examine the contents and
+configuration settings of tables, insert/update/delete values, and change
+configuration settings.
+
+The shell can be started by the following command:
+
+  $ACCUMULO_HOME/bin/accumulo shell -u [username]
+
+The shell will prompt for the password corresponding to the username specified
+and then display the following prompt:
+
+  Shell - Apache Accumulo Interactive Shell
+  -
+  - version 1.6
+  - instance name: myinstance
+  - instance id: 00000000-0000-0000-0000-000000000000
+  -
+  - type 'help' for a list of available commands
+  -
+
+=== Basic Administration
+
+The Accumulo shell can be used to create and delete tables, as well as to configure
+table and instance specific options.
+
+----
+root@myinstance> tables
+accumulo.metadata
+accumulo.root
+
+root@myinstance> createtable mytable
+
+root@myinstance mytable>
+
+root@myinstance mytable> tables
+accumulo.metadata
+accumulo.root
+mytable
+
+root@myinstance mytable> createtable testtable
+
+root@myinstance testtable>
+
+root@myinstance testtable> deletetable testtable
+deletetable { testtable } (yes|no)? yes
+Table: [testtable] has been deleted.
+
+root@myinstance>
+----
+
+The Shell can also be used to insert updates and scan tables. This is useful for
+inspecting tables.
+
+----
+root@myinstance mytable> scan
+
+root@myinstance mytable> insert row1 colf colq value1
+insert successful
+
+root@myinstance mytable> scan
+row1 colf:colq [] value1
+----
+
+The value in brackets ``[]'' is the visibility label. Since none was used, this is empty for this row.
+You can use the +-st+ option to scan to see the timestamp for the cell, too.
+
+=== Table Maintenance
+
+The *compact* command instructs Accumulo to schedule a compaction of the table during which
+files are consolidated and deleted entries are removed.
+
+  root@myinstance mytable> compact -t mytable
+  07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable started for given range
+
+The *flush* command instructs Accumulo to write all entries currently in memory for a given table
+to disk.
+
+  root@myinstance mytable> flush -t mytable
+  07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
+  initiated...
+
+=== User Administration
+
+The Shell can be used to add and remove users and to grant or revoke their privileges.
+
+----
+root@myinstance mytable> createuser bob
+Enter new password for 'bob': *********
+Please confirm new password for 'bob': *********
+
+root@myinstance mytable> authenticate bob
+Enter current password for 'bob': *********
+Valid
+
+root@myinstance mytable> grant System.CREATE_TABLE -s -u bob
+
+root@myinstance mytable> user bob
+Enter current password for 'bob': *********
+
+bob@myinstance mytable> userpermissions
+System permissions: System.CREATE_TABLE
+Table permissions (accumulo.metadata): Table.READ
+Table permissions (mytable): NONE
+
+bob@myinstance mytable> createtable bobstable
+
+bob@myinstance bobstable>
+
+bob@myinstance bobstable> user root
+Enter current password for 'root': *********
+
+root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
+----

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/table_configuration.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/table_configuration.txt b/docs/src/main/asciidoc/chapters/table_configuration.txt
new file mode 100644
index 0000000..fae8cd3
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/table_configuration.txt
@@ -0,0 +1,635 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Table Configuration
+
+Accumulo tables have a few options that can be configured to alter the default
+behavior of Accumulo as well as improve performance based on the data stored.
+These include locality groups, constraints, bloom filters, iterators, and block
+cache.  For a complete list of available configuration options, see <<configuration>>.
+
+=== Locality Groups
+Accumulo supports storing sets of column families separately on disk to allow
+clients to efficiently scan over columns that are frequently used together and to avoid
+scanning over column families that are not requested. After locality groups are set,
+Scanner and BatchScanner operations will automatically take advantage of them
+whenever the fetchColumnFamilies() method is used.
+
+By default, tables place all column families into the same ``default'' locality group.
+Additional locality groups can be configured at any time via the shell or
+programmatically as follows:
+
+==== Managing Locality Groups via the Shell
+
+  usage: setgroups <group>=<col fam>{,<col fam>}{ <group>=<col fam>{,<col fam>}}
+      [-?] -t <table>
+
+  user@myinstance mytable> setgroups group_one=colf1,colf2 -t mytable
+
+  user@myinstance mytable> getgroups -t mytable
+
+==== Managing Locality Groups via the Client API
+
+[source,java]
+----
+Connector conn;
+
+HashMap<String,Set<Text>> localityGroups = new HashMap<String, Set<Text>>();
+
+HashSet<Text> metadataColumns = new HashSet<Text>();
+metadataColumns.add(new Text("domain"));
+metadataColumns.add(new Text("link"));
+
+HashSet<Text> contentColumns = new HashSet<Text>();
+contentColumns.add(new Text("body"));
+contentColumns.add(new Text("images"));
+
+localityGroups.put("metadata", metadataColumns);
+localityGroups.put("content", contentColumns);
+
+conn.tableOperations().setLocalityGroups("mytable", localityGroups);
+
+// existing locality groups can be obtained as follows
+Map<String, Set<Text>> groups =
+    conn.tableOperations().getLocalityGroups("mytable");
+----
+
+The assignment of Column Families to Locality Groups can be changed at any time. The
+physical movement of column families into their new locality groups takes place via
+the periodic Major Compaction process that takes place continuously in the
+background. Major Compaction can also be scheduled to take place immediately
+through the shell:
+
+  user@myinstance mytable> compact -t mytable
+
+=== Constraints
+
+Accumulo supports constraints applied on mutations at insert time. This can be
+used to disallow certain inserts according to a user defined policy. Any mutation
+that fails to meet the requirements of the constraint is rejected and sent back to the
+client.
+
+Constraints can be enabled by setting a table property as follows:
+
+----
+user@myinstance mytable> constraint -t mytable -a com.test.ExampleConstraint com.test.AnotherConstraint
+
+user@myinstance mytable> constraint -l
+com.test.ExampleConstraint=1
+com.test.AnotherConstraint=2
+----
+
+Currently there are no general-purpose constraints provided with the Accumulo
+distribution. New constraints can be created by writing a Java class that implements
+the following interface:
+
+  org.apache.accumulo.core.constraints.Constraint
+
+To deploy a new constraint, create a jar file containing the class implementing the
+new constraint and place it in the lib directory of the Accumulo installation. New
+constraint jars can be added to Accumulo and enabled without restarting but any
+change to an existing constraint class requires Accumulo to be restarted.
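+
+The following is a minimal sketch of a constraint (the class name and the
+64-byte limit are illustrative) that rejects mutations whose row ID is too
+long:
+
+[source,java]
+----
+package com.example.constraints;
+
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.accumulo.core.constraints.Constraint;
+import org.apache.accumulo.core.data.Mutation;
+
+public class RowLengthConstraint implements Constraint {
+
+  private static final short ROW_TOO_LONG = 1;
+
+  @Override
+  public String getViolationDescription(short violationCode) {
+    if (violationCode == ROW_TOO_LONG)
+      return "Row IDs must be 64 bytes or less";
+    return null;
+  }
+
+  @Override
+  public List<Short> check(Environment env, Mutation mutation) {
+    if (mutation.getRow().length <= 64)
+      return null; // no violations
+    return Collections.singletonList(ROW_TOO_LONG);
+  }
+}
+----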
+
+An example of constraints can be found in
++accumulo/docs/examples/README.constraints+ with corresponding code under
++accumulo/examples/simple/src/main/java/accumulo/examples/simple/constraints+ .
+
+=== Bloom Filters
+As mutations are applied to an Accumulo table, several files are created per tablet. If
+bloom filters are enabled, Accumulo will create and load a small data structure into
+memory to determine whether a file contains a given key before opening the file.
+This can speed up lookups considerably.
+
+To enable bloom filters, enter the following command in the Shell:
+
+  user@myinstance> config -t mytable -s table.bloom.enabled=true
+
+An extensive example of using Bloom Filters can be found at
++accumulo/docs/examples/README.bloom+ .
+
+=== Iterators
+Iterators provide a modular mechanism for adding functionality to be executed by
+TabletServers when scanning or compacting data. This allows users to efficiently
+summarize, filter, and aggregate data. In fact, the built-in features of cell-level
+security and column fetching are implemented using Iterators.
+Some useful Iterators are provided with Accumulo and can be found in the
+*+org.apache.accumulo.core.iterators.user+* package.
+In each case, any custom Iterators must be included in Accumulo's classpath,
+typically by including a jar in +$ACCUMULO_HOME/lib+ or
++$ACCUMULO_HOME/lib/ext+, although the VFS classloader allows for
+classpath manipulation using a variety of schemes including URLs and HDFS URIs.
+
+==== Setting Iterators via the Shell
+
+Iterators can be configured on a table at scan, minor compaction and/or major
+compaction scopes. If the Iterator implements the OptionDescriber interface, the
+setiter command can be used which will interactively prompt the user to provide
+values for the given necessary options.
+
+  usage: setiter [-?] -ageoff | -agg | -class <name> | -regex |
+      -reqvis | -vers   [-majc] [-minc] [-n <itername>] -p <pri>
+      [-scan] [-t <table>]
+
+  user@myinstance mytable> setiter -t mytable -scan -p 15 -n myiter -class com.company.MyIterator
+
+The config command can always be used to manually configure iterators, which is useful
+in cases where the Iterator does not implement the OptionDescriber interface.
+
+  config -t mytable -s table.iterator.scan.myiter=15,com.company.MyIterator
+  config -t mytable -s table.iterator.minc.myiter=15,com.company.MyIterator
+  config -t mytable -s table.iterator.majc.myiter=15,com.company.MyIterator
+  config -t mytable -s table.iterator.scan.myiter.opt.myoptionname=myoptionvalue
+  config -t mytable -s table.iterator.minc.myiter.opt.myoptionname=myoptionvalue
+  config -t mytable -s table.iterator.majc.myiter.opt.myoptionname=myoptionvalue
+
+==== Setting Iterators Programmatically
+
+[source,java]
+scanner.addIterator(new IteratorSetting(
+    15, // priority
+    "myiter", // name this iterator
+    "com.company.MyIterator" // class name
+));
+
+Some iterators take additional parameters from client code, as in the following
+example:
+
+[source,java]
+IteratorSetting iter = new IteratorSetting(...);
+iter.addOption("myoptionname", "myoptionvalue");
+scanner.addIterator(iter);
+
+Tables support separate Iterator settings to be applied at scan time, upon minor
+compaction and upon major compaction. For most uses, tables will have identical
+iterator settings for all three to avoid inconsistent results.
+
+==== Versioning Iterators and Timestamps
+
+Accumulo provides the capability to manage versioned data through the use of
+timestamps within the Key. If a timestamp is not specified in the key created by the
+client then the system will set the timestamp to the current time. Two keys with
+identical rowIDs and columns but different timestamps are considered two versions
+of the same key. If two inserts are made into Accumulo with the same rowID,
+column, and timestamp, then the behavior is non-deterministic.
+
+Timestamps are sorted in descending order, so the most recent data comes first.
+Accumulo can be configured to return the top k versions, or versions later than a
+given date. The default is to return the one most recent version.
+
+The version policy can be changed by changing the VersioningIterator options for a
+table as follows:
+
+----
+user@myinstance mytable> config -t mytable -s table.iterator.scan.vers.opt.maxVersions=3
+
+user@myinstance mytable> config -t mytable -s table.iterator.minc.vers.opt.maxVersions=3
+
+user@myinstance mytable> config -t mytable -s table.iterator.majc.vers.opt.maxVersions=3
+----
+
+When a table is created, by default it is configured to use the
+VersioningIterator and keep one version. A table can be created without the
+VersioningIterator with the -ndi option in the shell. Also, the Java API
+has the following method:
+
+[source,java]
+connector.tableOperations().create(String tableName, boolean limitVersion);
+
+
+===== Logical Time
+
+Accumulo 1.2 introduces the concept of logical time. This ensures that timestamps
+set by Accumulo always move forward. This helps avoid problems caused by
+TabletServers that have different time settings. The per-tablet counter gives unique,
+one-up timestamps on a per-mutation basis. When using time in milliseconds, if
+two things arrive within the same millisecond then both receive the same
+timestamp. When using time in milliseconds, the times set by Accumulo will still
+always move forward and never backward.
+
+A table can be configured to use logical timestamps at creation time as follows:
+
+  user@myinstance> createtable -tl logical
+
+===== Deletes
+Deletes are special keys in Accumulo that get sorted along with all the other data.
+When a delete key is inserted, Accumulo will not show anything that has a
+timestamp less than or equal to the delete key. During major compaction, any keys
+older than a delete key are omitted from the new file created, and the omitted keys
+are removed from disk as part of the regular garbage collection process.
+
+==== Filters
+When scanning over a set of key-value pairs it is possible to apply an arbitrary
+filtering policy through the use of a Filter. Filters are types of iterators that return
+only key-value pairs that satisfy the filter logic. Accumulo has a few built-in filters
+that can be configured on any table: AgeOff, ColumnAgeOff, Timestamp, NoVis, and RegEx. More can be added
+by writing a Java class that extends the
++org.apache.accumulo.core.iterators.Filter+ class.
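+
+As a minimal sketch (the class name and logic are illustrative), a custom
+filter only needs to implement the +accept+ method:
+
+[source,java]
+----
+package com.example.filters;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Filter;
+
+public class NonEmptyValueFilter extends Filter {
+  @Override
+  public boolean accept(Key k, Value v) {
+    // keep only entries that have a non-empty value
+    return v.get().length > 0;
+  }
+}
+----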
+
+The AgeOff filter can be configured to remove data older than a certain date or a fixed
+amount of time from the present. The following example sets a table to delete
+everything inserted over 30 seconds ago:
+
+----
+user@myinstance> createtable filtertest
+
+user@myinstance filtertest> setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -ageoff
+AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter negate, default false
+                keeps k/v that pass accept method, true rejects k/v that pass accept method:
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter ttl, time to
+                live (milliseconds): 3000
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter currentTime, if set,
+                use the given value as the absolute time in milliseconds as the current time of day:
+
+user@myinstance filtertest>
+
+user@myinstance filtertest> scan
+
+user@myinstance filtertest> insert foo a b c
+
+user@myinstance filtertest> scan
+foo a:b [] c
+
+user@myinstance filtertest> sleep 4
+
+user@myinstance filtertest> scan
+
+user@myinstance filtertest>
+----
+
+To see the iterator settings for a table, use:
+
+  user@example filtertest> config -t filtertest -f iterator
+  ---------+---------------------------------------------+------------------
+  SCOPE    | NAME                                        | VALUE
+  ---------+---------------------------------------------+------------------
+  table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+  table    | table.iterator.majc.myfilter.opt.ttl ...... | 3000
+  table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+  table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+  table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+  table    | table.iterator.minc.myfilter.opt.ttl ...... | 3000
+  table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+  table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+  table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+  table    | table.iterator.scan.myfilter.opt.ttl ...... | 3000
+  table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+  table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+  ---------+---------------------------------------------+------------------
+
+==== Combiners
+
+Accumulo allows Combiners to be configured on tables and column
+families. When a Combiner is set, it is applied across the values
+associated with any keys that share rowID, column family, and column qualifier.
+This is similar to the reduce step in MapReduce, which applies some function to all
+the values associated with a particular key.
+
+For example, if a summing combiner were configured on a table and the following
+mutations were inserted:
+
+  Row     Family Qualifier Timestamp  Value
+  rowID1  colfA  colqA     20100101   1
+  rowID1  colfA  colqA     20100102   1
+
+The table would reflect only one aggregate value:
+
+  rowID1  colfA  colqA     -          2
+
+Combiners can be enabled for a table using the setiter command in the shell. Below is an example.
+
+----
+root@a14 perDayCounts> setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount
+                       -class org.apache.accumulo.core.iterators.user.SummingCombiner
+TypedValueCombiner can interpret Values as a variety of number encodings
+  (VLong, Long, or String) before combining
+----------> set SummingCombiner parameter columns,
+            <col fam>[:<col qual>]{,<col fam>[:<col qual>]} : day
+----------> set SummingCombiner parameter type, <VARNUM|LONG|STRING>: STRING
+
+root@a14 perDayCounts> insert foo day 20080101 1
+root@a14 perDayCounts> insert foo day 20080101 1
+root@a14 perDayCounts> insert foo day 20080103 1
+root@a14 perDayCounts> insert bar day 20080101 1
+root@a14 perDayCounts> insert bar day 20080101 1
+
+root@a14 perDayCounts> scan
+bar day:20080101 []    2
+foo day:20080101 []    2
+foo day:20080103 []    1
+----
+
+Accumulo includes some useful Combiners out of the box. To find these look in
+the *+org.apache.accumulo.core.iterators.user+* package.
+
+Additional Combiners can be added by creating a Java class that extends
++org.apache.accumulo.core.iterators.Combiner+ and adding a jar containing that
+class to Accumulo's lib/ext directory.
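+
+A minimal sketch of such a class (the name and value encoding are illustrative;
+the provided +TypedValueCombiner+ subclasses handle encodings more robustly):
+
+[source,java]
+----
+package com.example.combiners;
+
+import java.util.Iterator;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Combiner;
+
+public class MaxCombiner extends Combiner {
+  @Override
+  public Value reduce(Key key, Iterator<Value> iter) {
+    // assumes values are string-encoded longs; keeps the largest one
+    long max = Long.MIN_VALUE;
+    while (iter.hasNext())
+      max = Math.max(max, Long.parseLong(new String(iter.next().get())));
+    return new Value(Long.toString(max).getBytes());
+  }
+}
+----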
+
+An example of a Combiner can be found under
+
+  accumulo/examples/simple/src/main/java/org/apache/accumulo/examples/simple/combiner/StatsCombiner.java
+
+
+=== Block Cache
+
+In order to increase throughput of commonly accessed entries, Accumulo employs a block cache.
+This block cache buffers data in memory so that it doesn't have to be read off of disk.
+The RFile format that Accumulo prefers is a mix of index blocks and data blocks, where the index blocks are used to find the appropriate data blocks.
+Typical queries to Accumulo result in a binary search over several index blocks followed by a linear scan of one or more data blocks.
+
+The block cache can be configured on a per-table basis, and all tablets hosted on a tablet server share a single resource pool.
+To configure the size of the tablet server's block cache, set the following properties:
+
+  tserver.cache.data.size: Specifies the size of the cache for file data blocks.
+  tserver.cache.index.size: Specifies the size of the cache for file indices.
+
+To enable the block cache for your table, set the following properties:
+
+  table.cache.block.enable: Determines whether file (data) block cache is enabled.
+  table.cache.index.enable: Determines whether index cache is enabled.
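+
+For example, to turn on both caches for a table from the shell:
+
+  user@myinstance> config -t mytable -s table.cache.block.enable=true
+  user@myinstance> config -t mytable -s table.cache.index.enable=true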
+
+The block cache can have a significant effect on alleviating hot spots, as well as reducing query latency.
+It is enabled by default for the metadata tables.
+
+=== Compaction
+
+As data is written to Accumulo it is buffered in memory. The data buffered in
+memory is eventually written to HDFS on a per tablet basis. Files can also be
+added to tablets directly by bulk import. In the background tablet servers run
+major compactions to merge multiple files into one. The tablet server has to
+decide which tablets to compact and which files within a tablet to compact.
+This decision is made using the compaction ratio, which is configurable on a
+per table basis. To configure this ratio modify the following property:
+
+  table.compaction.major.ratio
+
+Increasing this ratio will result in more files per tablet and less compaction
+work. More files per tablet means higher query latency, so adjusting
+this ratio is a trade-off between ingest and query performance. The ratio
+defaults to 3.
+
+The way the ratio works is that a set of files is compacted into one file if the
+sum of the sizes of the files in the set is larger than the ratio multiplied by
+the size of the largest file in the set. If this is not true for the set of all
+files in a tablet, the largest file is removed from consideration, and the
+remaining files are considered for compaction. This is repeated until a
+compaction is triggered or there are no files left to consider.
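+
+The following sketch is only an illustration of that rule, not Accumulo's
+actual implementation:
+
+[source,java]
+----
+// would this set of file sizes be compacted into a single file?
+static boolean wouldCompact(List<Long> fileSizes, double ratio) {
+  List<Long> candidates = new ArrayList<Long>(fileSizes);
+  Collections.sort(candidates, Collections.reverseOrder()); // largest first
+  while (!candidates.isEmpty()) {
+    long sum = 0;
+    for (long size : candidates)
+      sum += size;
+    if (sum > ratio * candidates.get(0))
+      return true;          // compact this set into one file
+    candidates.remove(0);   // drop the largest file and reconsider the rest
+  }
+  return false;
+}
+----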
+
+The number of background threads tablet servers use to run major compactions is
+configurable. To configure this modify the following property:
+
+  tserver.compaction.major.concurrent.max
+
+Also, the number of threads tablet servers use for minor compactions is
+configurable. To configure this modify the following property:
+
+  tserver.compaction.minor.concurrent.max
+
+The numbers of minor and major compactions running and queued are visible on the
+Accumulo monitor page. This allows you to see if compactions are backing up
+and adjustments to the above settings are needed. When adjusting the number of
+threads available for compactions, consider the number of cores and other tasks
+running on the nodes such as maps and reduces.
+
+If major compactions are not keeping up, then the number of files per tablet
+will grow to a point such that query performance starts to suffer. One way to
+handle this situation is to increase the compaction ratio. For example, if the
+compaction ratio were set to 1, then every new file added to a tablet by minor
+compaction would immediately queue the tablet for major compaction. So if a
+tablet has a 200M file and minor compaction writes a 1M file, then the major
+compaction will attempt to merge the 200M and 1M file. If the tablet server
+has lots of tablets trying to do this sort of thing, then major compactions
+will back up and the number of files per tablet will start to grow, assuming
+data is being continuously written. Increasing the compaction ratio will
+alleviate backups by lowering the amount of major compaction work that needs to
+be done.
+
+Another option to deal with the files per tablet growing too large is to adjust
+the following property:
+
+  table.file.max
+
+When a tablet reaches this number of files and needs to flush its in-memory
+data to disk, it will choose to do a merging minor compaction. A merging minor
+compaction will merge the tablet's smallest file with the data in memory at
+minor compaction time. Therefore the number of files will not grow beyond this
+limit. This will make minor compactions take longer, which will cause ingest
+performance to decrease. This can cause ingest to slow down until major
+compactions have enough time to catch up. When adjusting this property, also
+consider adjusting the compaction ratio. Ideally, merging minor compactions
+never need to occur and major compactions will keep up. It is possible to
+configure the file max and compaction ratio such that only merging minor
+compactions occur and major compactions never occur. This should be avoided
+because doing only merging minor compactions causes O(_N_^2^) work to be done.
+The amount of work done by major compactions is O(_N_*log~_R_~(_N_)) where
+_R_ is the compaction ratio.
+
+Compactions can be initiated manually for a table. To initiate a minor
+compaction, use the flush command in the shell. To initiate a major compaction,
+use the compact command in the shell. The compact command will compact all
+tablets in a table to one file. Even tablets with one file are compacted. This
+is useful for the case where a major compaction filter is configured for a
+table. In 1.4 the ability to compact a range of a table was added. To use this
+feature specify start and stop rows for the compact command. This will only
+compact tablets that overlap the given row range.
+
+=== Pre-splitting tables
+
+Accumulo will balance and distribute tables across servers. Before a
+table gets large, it will be maintained as a single tablet on a single
+server. This limits the speed at which data can be added or queried
+to the speed of a single node. To improve performance when a table
+is new or small, you can add split points and generate new tablets.
+
+In the shell:
+
+  root@myinstance> createtable newTable
+  root@myinstance> addsplits -t newTable g n t
+
+This will create a new table with 4 tablets. The table will be split
+on the letters ``g'', ``n'', and ``t'' which will work nicely if the
+row data start with lower-case alphabetic characters. If your row
+data includes binary information or numeric information, or if the
+distribution of the row information is not flat, then you would pick
+different split points. Now ingest and query can proceed on 4 nodes
+which can improve performance.
+
+=== Merging tablets
+
+Over time, a table can get very large, so large that it has hundreds
+of thousands of split points. Once there are enough tablets to spread
+a table across the entire cluster, additional splits may not improve
+performance, and may create unnecessary bookkeeping. The distribution
+of data may change over time. For example, if row data contains date
+information, and data is continually added and removed to maintain a
+window of current information, tablets for older rows may be empty.
+
+Accumulo supports tablet merging, which can be used to reduce
+the number of split points. The following command will merge all rows
+from ``A'' to ``Z'' into a single tablet:
+
+  root@myinstance> merge -t myTable -s A -e Z
+
+If the result of a merge produces a tablet that is larger than the
+configured split size, the tablet may be split by the tablet server.
+Be sure to increase your tablet size prior to any merges if the goal
+is to have larger tablets:
+
+  root@myinstance> config -t myTable -s table.split.threshold=2G
+
+In order to merge small tablets, you can ask Accumulo to merge
+sections of a table smaller than a given size.
+
+  root@myinstance> merge -t myTable -s 100M
+
+By default, small tablets will not be merged into tablets that are
+already larger than the given size. This can leave isolated small
+tablets. To force small tablets to be merged into larger tablets use
+the +--force+ option:
+
+  root@myinstance> merge -t myTable -s 100M --force
+
+Merging away small tablets works on one section at a time. If your
+table contains many sections of small split points, or you are
+attempting to change the split size of the entire table, it will be
+faster to set the split point and merge the entire table:
+
+  root@myinstance> config -t myTable -s table.split.threshold=256M
+  root@myinstance> merge -t myTable
+
+=== Delete Range
+
+Consider an indexing scheme that uses date information in each row.
+For example ``20110823-15:20:25.013'' might be a row that specifies a
+date and time. In some cases, we might like to delete rows based on
+this date, say to remove all the data older than the current year.
+Accumulo supports a delete range operation which efficiently
+removes data between two rows. For example:
+
+  root@myinstance> deleterange -t myTable -s 2010 -e 2011
+
+This will delete all rows starting with ``2010'' and it will stop at
+any row starting with ``2011''. You can delete any data prior to 2011
+with:
+
+  root@myinstance> deleterange -t myTable -e 2011 --force
+
+The shell will not allow you to delete an unbounded range (no start)
+unless you provide the +--force+ option.
+
+Range deletion is implemented using splits at the given start/end
+positions, and will affect the number of splits in the table.
+
+=== Cloning Tables
+
+A new table can be created that points to an existing table's data. This is a
+very quick metadata operation; no data is actually copied. The cloned table
+and the source table can change independently after the clone operation. One
+use case for this feature is testing. For example to test a new filtering
+iterator, clone the table, add the filter to the clone, and force a major
+compaction. To perform a test on less data, clone a table and then use delete
+range to efficiently remove a lot of data from the clone. Another use case is
+generating a snapshot to guard against human error. To create a snapshot,
+clone a table and then disable write permissions on the clone.
+
+The clone operation will point to the source table's files. This is why the
+flush option is present and is enabled by default in the shell. If the flush
+option is not enabled, then any data the source table currently has in memory
+will not exist in the clone.
+
+A cloned table copies the configuration of the source table. However, the
+permissions of the source table are not copied to the clone. After a clone is
+created, only the user that created the clone can read and write to it.
+
+In the following example we see that data inserted after the clone operation is
+not visible in the clone.
+
+----
+root@a14> createtable people
+
+root@a14 people> insert 890435 name last Doe
+root@a14 people> insert 890435 name first John
+
+root@a14 people> clonetable people test
+
+root@a14 people> insert 890436 name first Jane
+root@a14 people> insert 890436 name last Doe
+
+root@a14 people> scan
+890435 name:first []    John
+890435 name:last []    Doe
+890436 name:first []    Jane
+890436 name:last []    Doe
+
+root@a14 people> table test
+
+root@a14 test> scan
+890435 name:first []    John
+890435 name:last []    Doe
+
+root@a14 test>
+----
+
+The du command in the shell shows how much space a table is using in HDFS.
+This command can also show how much overlapping space two cloned tables have in
+HDFS. In the example below du shows table ci is using 428M. Then ci is cloned
+to cic and du shows that both tables share 428M. After three entries are
+inserted into cic and it is flushed, du shows the two tables still share 428M but
+cic has 226 bytes to itself. Finally, table cic is compacted and then du shows
+that each table uses 428M.
+
+----
+root@a14> du ci
+             428,482,573 [ci]
+
+root@a14> clonetable ci cic
+
+root@a14> du ci cic
+             428,482,573 [ci, cic]
+
+root@a14> table cic
+
+root@a14 cic> insert r1 cf1 cq1 v1
+root@a14 cic> insert r1 cf1 cq2 v2
+root@a14 cic> insert r1 cf1 cq3 v3
+
+root@a14 cic> flush -t cic -w
+27 15:00:13,908 [shell.Shell] INFO : Flush of table cic completed.
+
+root@a14 cic> du ci cic
+             428,482,573 [ci, cic]
+                     226 [cic]
+
+root@a14 cic> compact -t cic -w
+27 15:00:35,871 [shell.Shell] INFO : Compacting table ...
+27 15:03:03,303 [shell.Shell] INFO : Compaction of table cic completed for given range
+
+root@a14 cic> du ci cic
+             428,482,573 [ci]
+             428,482,612 [cic]
+
+root@a14 cic>
+----
+
+=== Exporting Tables
+
+Accumulo supports exporting tables for the purpose of copying tables to another
+cluster. Exporting and importing tables preserves the table's configuration,
+splits, and logical time. Tables are exported and then copied via the hadoop
+distcp command. To export a table, it must be offline and stay offline while
+distcp runs. The reason it needs to stay offline is to prevent files from being
+deleted. A table can be cloned and the clone taken offline in order to avoid
+losing access to the table. See +docs/examples/README.export+ for an example.
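+
+A rough sketch of the workflow from the shell follows; the table and path names
+are illustrative, and the export directory contains a file listing exactly what
+to pass to distcp (see the referenced example):
+
+----
+root@myinstance> clonetable mytable mytable_export
+root@myinstance> offline mytable_export
+root@myinstance> exporttable -t mytable_export /export/mytable
+----
+
+After copying the export directory and the files it references to the
+destination cluster with hadoop distcp, the table can be imported there:
+
+  root@otherinstance> importtable mytable_copy /export/mytable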

http://git-wip-us.apache.org/repos/asf/accumulo/blob/900d6abb/docs/src/main/asciidoc/chapters/table_design.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/table_design.txt b/docs/src/main/asciidoc/chapters/table_design.txt
new file mode 100644
index 0000000..31fa49a
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/table_design.txt
@@ -0,0 +1,336 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Table Design
+
+=== Basic Table
+
+Since Accumulo tables are sorted by row ID, each table can be thought of as being
+indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing
+a binary search, first across the tablets, and then within a tablet. Clients should
+choose a row ID carefully in order to support their desired application. A simple rule
+is to select a unique identifier as the row ID for each entity to be stored and assign
+all the other attributes to be tracked to be columns under this row ID. For example,
+if we have the following data in a comma-separated file:
+
+  userid,age,address,account-balance
+
+We might choose to store this data using the userid as the rowID, the column
+name in the column family, and a blank column qualifier:
+
+[source,java]
+----
+Mutation m = new Mutation(userid);
+final String column_qualifier = "";
+m.put("age", column_qualifier, age);
+m.put("address", column_qualifier, address);
+m.put("balance", column_qualifier, account_balance);
+
+// writer is a BatchWriter previously created for the userdata table
+writer.addMutation(m);
+----
+
+We could then retrieve any of the columns for a specific userid by specifying the
+userid as the range of a scanner and fetching specific columns:
+
+[source,java]
+----
+Range r = new Range(userid, userid); // single row
+Scanner s = conn.createScanner("userdata", auths);
+s.setRange(r);
+s.fetchColumnFamily(new Text("age"));
+
+for(Entry<Key,Value> entry : s) {
+  System.out.println(entry.getValue().toString());
+}
+----
+
+=== RowID Design
+
+Often it is necessary to transform the rowID in order to have rows ordered in a way
+that is optimal for anticipated access patterns. A good example of this is reversing
+the order of components of internet domain names in order to group rows of the
+same parent domain together:
+
+  com.google.code
+  com.google.labs
+  com.google.mail
+  com.yahoo.mail
+  com.yahoo.research
+
+Some data may result in the creation of very large rows, that is, rows with many columns.
+In this case, the table designer may wish to split up these rows for better load
+balancing while keeping them sorted together for scanning purposes. This can be
+done by appending a random substring at the end of the row:
+
+  com.google.code_00
+  com.google.code_01
+  com.google.code_02
+  com.google.labs_00
+  com.google.mail_00
+  com.google.mail_01
+
+It could also be done by appending a string representation of some period of time,
+such as the date truncated to the week or month:
+
+  com.google.code_201003
+  com.google.code_201004
+  com.google.code_201005
+  com.google.labs_201003
+  com.google.mail_201003
+  com.google.mail_201004
+
+Appending dates provides the additional capability of restricting a scan to a given
+date range.
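+
+As a minimal sketch of the random-substring approach, an ingest client might derive
+the suffix from a hash of some attribute of the record; the variable names and the
+shard count below are purely illustrative:
+
+[source,java]
+----
+// derive a two-digit shard suffix from a hash of the record id (illustrative)
+final int NUM_SHARDS = 16;
+String shard = String.format("%02d", (recordId.hashCode() & 0x7fffffff) % NUM_SHARDS);
+String rowId = reversedDomain + "_" + shard;
+----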
+
+=== Lexicoders
+
+Since Keys in Accumulo are sorted lexicographically by default, it's often useful to encode
+common data types into a byte format in which their sort order corresponds to the sort order
+in their native form. An example of this is encoding dates and numerical data so that range
+scans and seeks over them behave as expected.
+
+The lexicoders are a standard and extensible way of encoding Java types. Here's an example
+of a lexicoder that encodes a Java Date object so that its lexicographic order matches its
+chronological order:
+
+[source,java]
+----
+// create new date lexicoder
+DateLexicoder dateEncoder = new DateLexicoder();
+
+// truncate time to hours
+long epoch = System.currentTimeMillis();
+Date hour = new Date(epoch - (epoch % 3600000));
+
+// encode the rowId so that it is sorted lexicographically
+Mutation mutation = new Mutation(dateEncoder.encode(hour));
+mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+----
+
+If we want to return the most recent date first, we can reverse the sort order
+with the reverse lexicoder:
+
+[source,java]
+----
+// create new date lexicoder and reverse lexicoder
+DateLexicoder dateEncoder = new DateLexicoder();
+ReverseLexicoder<Date> reverseEncoder = new ReverseLexicoder<Date>(dateEncoder);
+
+// truncate date to hours
+long epoch = System.currentTimeMillis();
+Date hour = new Date(epoch - (epoch % 3600000));
+
+// encode the rowId so that it sorts in reverse lexicographic order
+Mutation mutation = new Mutation(reverseEncoder.encode(hour));
+mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+----
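+
+Reading the data back works through the same lexicoder; a minimal sketch, assuming an
++entry+ obtained from scanning the table above:
+
+[source,java]
+----
+// decode the reverse-encoded row back into a Date
+Date when = reverseEncoder.decode(entry.getKey().getRow().copyBytes());
+----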
+
+
+=== Indexing
+In order to support lookups via more than one attribute of an entity, additional
+indexes can be built. However, because Accumulo tables can support any number of
+columns without specifying them beforehand, a single additional index will often
+suffice for supporting lookups of records in the main table. Here, the index has, as
+the rowID, the Value or Term from the main table, the column families are the same,
+and the column qualifier of the index table contains the rowID from the main table.
+
+[width="75%",cols="^,^,^,^"]
+[grid="rows"]
+[options="header"]
+|=============================================
+|RowID |Column Family |Column Qualifier |Value
+|Term  |Field Name    |MainRowID        |
+|=============================================
+
+Note: We store rowIDs in the column qualifier rather than the Value so that we can
+have more than one rowID associated with a particular term within the index. If we
+stored this in the Value, we would only see one of the rows in which the value
+appears, since Accumulo is configured by default to return only the most recent
+value associated with a key.
+
+Lookups can then be done by scanning the index table first for occurrences of the
+desired values in the columns specified, which returns a list of row IDs from the main
+table. These can then be used to retrieve each matching record, either in its entirety
+or as a subset of its columns, from the main table.
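+
+Index entries following this layout can be written as records are ingested; the sketch
+below is illustrative and assumes a BatchWriter named +indexWriter+ along with the term,
+field name, and main-table row ID of the record being indexed:
+
+[source,java]
+----
+// index entry: row = term, column family = field name, column qualifier = main table rowID
+Mutation indexMutation = new Mutation(termValue);
+indexMutation.put(fieldName, mainRowId, "");   // value is left empty
+indexWriter.addMutation(indexMutation);
+----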
+
+To support efficient lookups of multiple rowIDs from the same table, the Accumulo
+client library provides a BatchScanner. Users specify a set of Ranges to the
+BatchScanner, which performs the lookups in multiple threads to multiple servers
+and returns an Iterator over all the rows retrieved. Unlike the basic Scanner
+interface, the rows returned by a BatchScanner are NOT in sorted order.
+
+[source,java]
+----
+// first we scan the index for IDs of rows matching our query
+Text term = new Text("mySearchTerm");
+
+HashSet<Range> matchingRows = new HashSet<Range>();
+
+Scanner indexScanner = conn.createScanner("index", auths);
+indexScanner.setRange(new Range(term, term));
+
+// we retrieve the matching rowIDs and create a set of ranges
+for(Entry<Key,Value> entry : indexScanner) {
+    matchingRows.add(new Range(entry.getKey().getColumnQualifier()));
+}
+
+// now we pass the set of rowIDs to the batch scanner to retrieve them
+BatchScanner bscan = conn.createBatchScanner("table", auths, 10);
+bscan.setRanges(matchingRows);
+bscan.fetchColumnFamily(new Text("attributes"));
+
+for(Entry<Key,Value> entry : bscan) {
+    System.out.println(entry.getValue());
+}
+----
+
+One advantage of the dynamic schema capabilities of Accumulo is that different
+fields may be indexed into the same physical table. However, it may be necessary to
+create different index tables if the terms must be formatted differently in order to
+maintain proper sort order. For example, real numbers must be formatted
+differently than their usual notation in order to be sorted correctly. In these cases,
+usually one index per unique data type will suffice.
+
+=== Entity-Attribute and Graph Tables
+
+Accumulo is ideal for storing entities and their attributes, especially if the
+attributes are sparse. It is often useful to join several datasets together on common
+entities within the same table. This can allow for the representation of graphs,
+including nodes, their attributes, and connections to other nodes.
+
+Rather than storing individual events, Entity-Attribute or Graph tables store
+aggregate information about the entities involved in the events and the
+relationships between entities. This is often preferable when single events aren't
+very useful and when a continuously updated summarization is desired.
+
+The physical schema for an entity-attribute or graph table is as follows:
+
+[width="75%",cols="^,^,^,^"]
+[grid="rows"]
+[options="header"]
+|==================================================
+|RowID    |Column Family  |Column Qualifier |Value
+|EntityID |Attribute Name |Attribute Value  |Weight
+|EntityID |Edge Type      |Related EntityID |Weight
+|==================================================
+
+For example, to keep track of employees, managers, and products, the following
+entity-attribute table could be used. Note that the weights are not always necessary
+and are set to 0 when not used.
+
+[width="75%",cols="^,^,^,^"]
+[grid="rows"]
+[options="header"]
+|=============================================
+|RowID |Column Family |Column Qualifier |Value
+| E001 | name         | bob             | 0
+| E001 | department   | sales           | 0
+| E001 | hire_date    | 20030102        | 0
+| E001 | units_sold   | P001            | 780
+| E002 | name         | george          | 0
+| E002 | department   | sales           | 0
+| E002 | manager_of   | E001            | 0
+| E002 | manager_of   | E003            | 0
+| E003 | name         | harry           | 0
+| E003 | department   | accounts_recv   | 0
+| E003 | hire_date    | 20000405        | 0
+| E003 | units_sold   | P002            | 566
+| E003 | units_sold   | P001            | 232
+| P001 | product_name | nike_airs       | 0
+| P001 | product_type | shoe            | 0
+| P001 | in_stock     | germany         | 900
+| P001 | in_stock     | brazil          | 200
+| P002 | product_name | basic_jacket    | 0
+| P002 | product_type | clothing        | 0
+| P002 | in_stock     | usa             | 3454
+| P002 | in_stock     | germany         | 700
+|=============================================
+
+To allow efficient updating of edge weights, an aggregating iterator can be
+configured to add the values of all mutations applied with the same key. These types
+of tables can easily be created from raw events by simply extracting the entities,
+attributes, and relationships from individual events and inserting the keys into
+Accumulo each with a count of 1. The aggregating iterator will take care of
+maintaining the edge weights.
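+
+One way to configure this (a minimal sketch, assuming the table is named +graph+ and
+the weights are written as string-encoded longs) is to attach the SummingCombiner
+that ships with Accumulo:
+
+[source,java]
+----
+// sum the values of all mutations applied with the same key
+IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
+SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+SummingCombiner.setCombineAllColumns(setting, true);
+conn.tableOperations().attachIterator("graph", setting);
+----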
+
+=== Document-Partitioned Indexing
+
+Using a simple index as described above works well when looking for records that
+match one of a set of given criteria. When looking for records that match more than
+one criterion simultaneously, such as when looking for documents that contain all of
+the words `the' and `white' and `house', there are several issues.
+
+The first is that the set of all records matching any one of the search terms must be
+sent to the client, which incurs a lot of network traffic. The second problem is that the
+client is responsible for performing set intersection on the sets of records returned
+to eliminate all but the records matching all search terms. The memory of the client
+may easily be overwhelmed during this operation.
+
+For these reasons Accumulo includes support for a scheme known as sharded
+indexing, in which these set operations can be performed at the TabletServers and
+decisions about which records to include in the result set can be made without
+incurring network traffic.
+
+This is accomplished via partitioning records into bins that each reside on at most
+one TabletServer, and then creating an index of terms per record within each bin as
+follows:
+
+[width="75%",cols="^,^,^,^"]
+[grid="rows"]
+[options="header"]
+|==============================================
+|RowID |Column Family |Column Qualifier |Value
+|BinID |Term          |DocID            |Weight
+|==============================================
+
+Documents or records are mapped into bins by a user-defined ingest application. By
+storing the BinID as the RowID we ensure that all the information for a particular
+bin is contained in a single tablet and hosted on a single TabletServer since
+Accumulo never splits rows across tablets. Storing the Terms as column families
+serves to enable fast lookups of all the documents within this bin that contain the
+given term.
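+
+Ingest for this layout writes one entry per term into the document's bin; the
+following sketch assumes a BatchWriter named +writer+ and illustrative variable names:
+
+[source,java]
+----
+// one entry per term: row = bin, column family = term, column qualifier = document id
+Mutation m = new Mutation(binId);
+for (String term : documentTerms) {
+  m.put(term, docId, "");   // weight omitted for simplicity
+}
+writer.addMutation(m);
+----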
+
+Finally, we perform set intersection operations on the TabletServer via a special
+iterator called the Intersecting Iterator. Since documents are partitioned into many
+bins, a search of all documents must search every bin. We can use the BatchScanner
+to scan all bins in parallel. The Intersecting Iterator should be enabled on a
+BatchScanner within user query code as follows:
+
+[source,java]
+----
+Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+
+BatchScanner bscan = conn.createBatchScanner(table, auths, 20);
+
+IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
+IntersectingIterator.setColumnFamilies(iter, terms);
+
+bscan.addScanIterator(iter);
+bscan.setRanges(Collections.singleton(new Range()));
+
+for(Entry<Key,Value> entry : bscan) {
+    System.out.println(" " + entry.getKey().getColumnQualifier());
+}
+----
+
+This code effectively has the BatchScanner scan all tablets of a table, looking for
+documents that match all the given terms. Because all tablets are being scanned for
+every query, each query is more expensive than other Accumulo scans, which
+typically involve a small number of TabletServers. This reduces the number of
+concurrent queries supported and is subject to what is known as the `straggler'
+problem, in which every query runs only as fast as the slowest participating server.
+
+Of course, fast servers will return their results to the client, which can display them
+to the user immediately while waiting for the rest of the results to arrive. If the
+results are unordered, this is quite effective, as the first results to arrive are as good
+as any others to the user.