Posted to commits@accumulo.apache.org by ct...@apache.org on 2014/03/28 01:50:53 UTC

[1/6] ACCUMULO-1487, ACCUMULO-1491 Stop packaging docs for monitor

Repository: accumulo
Updated Branches:
  refs/heads/master 0721f8dca -> 5655a044e


http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/isolation.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/isolation.html b/server/monitor/src/main/resources/docs/isolation.html
deleted file mode 100644
index d0e77cc..0000000
--- a/server/monitor/src/main/resources/docs/isolation.html
+++ /dev/null
@@ -1,39 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Isolation</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Isolation</h1>
-
-<h3>Scanning</h3>
-
-<p>Accumulo supports the ability to present an isolated view of rows when scanning.  There are three possible ways that a row could change in Accumulo:
-<ul>
- <li>a mutation applied to a table
- <li>iterators executed as part of a minor or major compaction 
- <li>bulk import of new files
-</ul>
-Isolation guarantees that either all or none of the changes made by these operations on a row are seen.  Use the <a href='apidocs/org/apache/accumulo/core/client/IsolatedScanner.html'>IsolatedScanner</a> to obtain an isolated view of an Accumulo table.  When using the regular scanner it is possible to see a non-isolated view of a row.  For example, if a mutation modifies three columns, it is possible that you will only see two of those modifications.  With the isolated scanner either all three of the changes are seen or none are.  For an example of this, try running the <a href='apidocs/org/apache/accumulo/examples/simple/isolation/InterferenceTest.html'>InterferenceTest</a> example.
-
-<p>At this time there is no client side isolation support for the <a href='apidocs/org/apache/accumulo/core/client/BatchScanner.html'>BatchScanner</a>.  You may consider using the <a href='apidocs/org/apache/accumulo/core/iterators/WholeRowIterator.html'>WholeRowIterator</a> with the <a href='apidocs/org/apache/accumulo/core/client/BatchScanner.html'>BatchScanner</a> to achieve isolation, though.  The drawback of this approach is that entire rows are read into memory on the server side.  If a row is too big, it may crash a tablet server.  The <a href='apidocs/org/apache/accumulo/core/client/IsolatedScanner.html'>IsolatedScanner</a> buffers rows on the client side so a large row will not crash a tablet server.
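-
-<p>Below is a minimal sketch of using the isolated scanner.  It is hypothetical
-and assumes an existing Connector named conn and a table named mytable; imports
-from org.apache.accumulo.core.client are omitted.
-
-<p><pre>
-Scanner scanner = conn.createScanner("mytable", new Authorizations());
-Scanner isolated = new IsolatedScanner(scanner);
-for (Map.Entry&lt;Key,Value&gt; entry : isolated) {
-  // all of a row's changes are seen together, or not at all
-  System.out.println(entry.getKey() + " " + entry.getValue());
-}
-</pre>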
-
-<h3>Iterators</h3>
-<p>When writing server-side iterators for Accumulo, isolation is something to be aware of.  A scan-time iterator in Accumulo reads from a set of data sources.  While an iterator is reading data it has an isolated view.  However, after it returns a key/value it is possible that Accumulo may switch data sources and re-seek the iterator.  This is done so that resources may be reclaimed.  When the user does not request isolation this can occur after any key is returned.  When a user requests isolation this will only occur after a new row is returned, in which case it will re-seek to the very beginning of the next possible row.
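-
-<p>The fragment below is a hypothetical illustration of that contract, not
-actual tablet server code.  Under isolation, a re-seek always starts at the
-beginning of the row following the last key returned (classes are from
-org.apache.accumulo.core.data).
-
-<p><pre>
-Key lastReturned = new Key("row1", "cf1", "cq1");
-// first possible key of the next row
-Key nextRow = lastReturned.followingKey(PartialKey.ROW);
-// under isolation, the re-seek uses a range starting here
-Range reseek = new Range(nextRow, true, null, false);
-</pre>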

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/lgroups.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/lgroups.html b/server/monitor/src/main/resources/docs/lgroups.html
deleted file mode 100644
index 0012ffb..0000000
--- a/server/monitor/src/main/resources/docs/lgroups.html
+++ /dev/null
@@ -1,42 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Locality Groups</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Locality Groups</h1>
-
-<p>Accumulo supports locality groups similar to those described in the Bigtable paper.  Locality groups allow vertical partitioning of data by column family.  This allows users to configure their tables such that scans over a subset of column families are much faster.  The Accumulo locality group model has the following features.
-
-<UL>
- <LI>There is a default locality group that holds all column families not in a declared locality group.
- <LI>No requirement to declare locality groups or column families at table creation.
- <LI>Can change locality group configuration on the fly.
-</UL>
-
-
-<P>When the locality group configuration for a table is changed, it has no effect on existing data.  All minor and major compactions that occur after the change will organize data into the new locality group structure.  As data is written into a table, it will cause minor and major compactions to occur.  Over time this will result in all data being organized according to the new locality groups.  If all data must be reorganized into the new locality groups immediately, this can be accomplished by forcing a full major compaction of the table.  Use the compact command in the shell to accomplish this.
-
-<P>There are two ways to manipulate locality groups, via the shell or through the Java API.  From the shell use the getgroups and setgroups commands.  Through the API, <a href='apidocs/org/apache/accumulo/core/client/admin/TableOperations.html'>TableOperations</a> has the methods setLocalityGroups() and getLocalityGroups().
-
-<P>To limit scans to a set of locality groups, use the fetchColumnFamily() function on  <a href='apidocs/org/apache/accumulo/core/client/Scanner.html'>Scanner</a> or <a href='apidocs/org/apache/accumulo/core/client/BatchScanner.html'>BatchScanner</a>.  From the shell use scan with the -c option.  
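-
-<P>Below is a sketch of the Java API usage.  It is hypothetical and assumes an
-existing Connector named conn, a table named mytable, and a column family named
-content; imports from java.util, org.apache.hadoop.io, and
-org.apache.accumulo.core.client are omitted.
-
-<P><pre>
-Map&lt;String,Set&lt;Text&gt;&gt; groups = new HashMap&lt;String,Set&lt;Text&gt;&gt;();
-Set&lt;Text&gt; fams = new HashSet&lt;Text&gt;();
-fams.add(new Text("content"));
-groups.put("contentGroup", fams);
-conn.tableOperations().setLocalityGroups("mytable", groups);
-
-// after compactions reorganize existing data, scans that fetch only
-// this family read only this group's portion of each file
-Scanner scanner = conn.createScanner("mytable", new Authorizations());
-scanner.fetchColumnFamily(new Text("content"));
-</pre>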
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/metrics.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/metrics.html b/server/monitor/src/main/resources/docs/metrics.html
deleted file mode 100644
index 00f0a5b..0000000
--- a/server/monitor/src/main/resources/docs/metrics.html
+++ /dev/null
@@ -1,182 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Metrics</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Metrics</h1>
-
-As of version 1.2, metrics for the Master, Tablet Servers, and Loggers are available. A new configuration file, accumulo-metrics.xml, is located in the conf directory and can
-be modified to turn metrics collection on or off, and to enable file logging if desired. This file can be modified at runtime and the changes will be seen after a few seconds.
-Except where specified, all time values are in milliseconds.
-<h1>Master Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=MasterMetricsMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getPingCount();</td><td>Number of pings to tablet servers</td></tr>
-		<tr><td>public long getPingAvgTime();</td><td>Average time for each ping</td></tr>
-		<tr class="highlight"><td>public long getPingMinTime();</td><td>Minimum time for each ping</td></tr>
-		<tr><td>public long getPingMaxTime();</td><td>Maximum time for each ping</td></tr>
-		<tr class="highlight"><td>public String getTServerWithHighestPingTime();</td><td>tablet server with highest ping</td></tr>
-		<tr><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
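-<p>The sketch below shows one way these attributes might be read remotely using
-standard JMX; it assumes JMX remote access has been enabled, and the host and
-port are hypothetical (classes are from javax.management).</p>
-<p><pre>
-JMXServiceURL url = new JMXServiceURL(
-    "service:jmx:rmi:///jndi/rmi://master.example.com:9010/jmxrmi");
-JMXConnector jmxc = JMXConnectorFactory.connect(url);
-MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
-// match any thread name in the object name
-ObjectName pattern = new ObjectName(
-    "org.apache.accumulo.server.metrics:type=MasterMetricsMBean,name=*");
-for (ObjectName name : mbsc.queryNames(pattern, null)) {
-  Long pings = (Long) mbsc.getAttribute(name, "PingCount");
-  System.out.println(name + " PingCount=" + pings);
-}
-jmxc.close();
-</pre>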
-<h1>Logging Server Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=LogWriterMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getCloseCount();</td><td>Number of closed log files</td></tr>
-		<tr><td>public long getCloseAvgTime();</td><td>Average time to close a log file</td></tr>
-		<tr class="highlight"><td>public long getCloseMinTime();</td><td>Minimum time to close a log file</td></tr>
-		<tr><td>public long getCloseMaxTime();</td><td>Maximum time to close a log file</td></tr>
-		<tr class="highlight"><td>public long getCopyCount();</td><td>Number of log files copied</td></tr>
-		<tr><td>public long getCopyAvgTime();</td><td>Average time to copy a log file</td></tr>
-		<tr class="highlight"><td>public long getCopyMinTime();</td><td>Minimum time to copy a log file</td></tr>
-		<tr><td>public long getCopyMaxTime();</td><td>Maximum time to copy a log file</td></tr>
-		<tr class="highlight"><td>public long getCreateCount();</td><td>Number of log files created</td></tr>
-		<tr><td>public long getCreateMinTime();</td><td>Minimum time to create a log file</td></tr>
-		<tr class="highlight"><td>public long getCreateMaxTime();</td><td>Maximum time to create a log file</td></tr>
-		<tr><td>public long getCreateAvgTime();</td><td>Average time to create a log file</td></tr>
-		<tr class="highlight"><td>public long getLogAppendCount();</td><td>Number of times logs have been appended</td></tr>
-		<tr><td>public long getLogAppendMinTime();</td><td>Minimum time to append to a log file</td></tr>
-		<tr class="highlight"><td>public long getLogAppendMaxTime();</td><td>Maximum time to append to a log file</td></tr>
-		<tr><td>public long getLogAppendAvgTime();</td><td>Average time to append to a log file</td></tr>
-		<tr class="highlight"><td>public long getLogFlushCount();</td><td>Number of log file flushes</td></tr>
-		<tr><td>public long getLogFlushMinTime();</td><td>Minimum time to flush a log file</td></tr>
-		<tr class="highlight"><td>public long getLogFlushMaxTime();</td><td>Maximum time to flush a log file</td></tr>
-		<tr><td>public long getLogFlushAvgTime();</td><td>Average time to flush a log file</td></tr>
-		<tr class="highlight"><td>public long getLogExceptionCount();</td><td>Number of log exceptions</td></tr>
-		<tr><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
-<h1>Tablet Server Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public int getOnlineCount();</td><td>Number of tablets online</td></tr>
-		<tr><td>public int getOpeningCount();</td><td>Number of tablets that are being opened</td></tr>
-		<tr class="highlight"><td>public int getUnopenedCount();</td><td>Number of unopened tablets</td></tr>
-		<tr><td>public int getMajorCompactions();</td><td>Number of Major Compactions currently running</td></tr>
-		<tr class="highlight"><td>public int getMajorCompactionsQueued();</td><td>Number of Major Compactions yet to run</td></tr>
-		<tr><td>public int getMinorCompactions();</td><td>Number of Minor Compactions currently running</td></tr>
-		<tr class="highlight"><td>public int getMinorCompactionsQueued();</td><td>Number of Minor Compactions yet to run</td></tr>
-		<tr><td>public int getShutdownStage();</td><td>Current stage in the shutdown process</td></tr>
-		<tr class="highlight"><td>public long getEntries();</td><td>Number of entries in all the tablets</td></tr>
-		<tr><td>public long getEntriesInMemory();</td><td>Number of entries in memory on all tablet servers</td></tr>
-		<tr class="highlight"><td>public long getQueries();</td><td>Number of queries currently running on all the tablet servers</td></tr>
-		<tr><td>public long getIngest();</td><td>Number of entries currently being ingested on all the tablet servers</td></tr>
-		<tr class="highlight"><td>public long getTotalMinorCompactions();</td><td>Number of Minor Compactions completed</td></tr>
-		<tr><td>public double getHoldTime();</td><td>Number of seconds that ingest is waiting for memory to be freed on tablet servers</td></tr>
-		<tr class="highlight"><td>public String getName();</td><td>Address of the master</td></tr>
-	</tbody>
-</table>
-<h1>Tablet Server Minor Compaction Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerMinCMetricsMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getMinorCompactionCount();</td><td>Number of completed Minor Compactions on all tablet servers</td></tr>
-		<tr><td>public long getMinorCompactionAvgTime();</td><td>Average time to complete Minor Compaction</td></tr>
-		<tr class="highlight"><td>public long getMinorCompactionMinTime();</td><td>Minimum time to complete Minor Compaction</td></tr>
-		<tr><td>public long getMinorCompactionMaxTime();</td><td>Maximum time to complete Minor Compaction</td></tr>
-		<tr class="highlight"><td>public long getMinorCompactionQueueCount();</td><td>Number of Minor Compactions yet to be run</td></tr>
-		<tr><td>public long getMinorCompactionQueueAvgTime();</td><td>Average time Minor Compaction is in the queue</td></tr>
-		<tr class="highlight"><td>public long getMinorCompactionQueueMinTime();</td><td>Minimum time Minor Compaction is in the queue</td></tr>
-		<tr><td>public long getMinorCompactionQueueMaxTime();</td><td>Maximum time Minor Compaction is in the queue</td></tr>
-		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
-<h1>Tablet Server Scan Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerScanMetricsMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getScanCount();</td><td>Number of scans completed</td></tr>
-		<tr><td>public long getScanAvgTime();</td><td>Average time for scan operation</td></tr>
-		<tr class="highlight"><td>public long getScanMinTime();</td><td>Minimum time for scan operation</td></tr>
-		<tr><td>public long getScanMaxTime();</td><td>Maximum time for scan operation</td></tr>
-		<tr class="highlight"><td>public long getResultCount();</td><td>Number of scans that returned a result</td></tr>
-		<tr><td>public long getResultAvgSize();</td><td>Average size of scan result</td></tr>
-		<tr class="highlight"><td>public long getResultMinSize();</td><td>Minimum size of scan result</td></tr>
-		<tr><td>public long getResultMaxSize();</td><td>Maximum size of scan result</td></tr>
-		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
-<h1>Tablet Server Update Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerUpdateMetricsMBean,name= &lt;current thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getPermissionErrorCount();</td><td>Number of permission errors</td></tr>
-		<tr><td>public long getUnknownTabletErrorCount();</td><td>Number of unknown tablet errors</td></tr>
-		<tr class="highlight"><td>public long getMutationArrayAvgSize();</td><td>Average size of mutation array</td></tr>
-		<tr><td>public long getMutationArrayMinSize();</td><td>Minimum size of mutation array</td></tr>
-		<tr class="highlight"><td>public long getMutationArrayMaxSize();</td><td>Maximum size of mutation array</td></tr>
-		<tr><td>public long getCommitPrepCount();</td><td>Number of commit preparations</td></tr>
-		<tr class="highlight"><td>public long getCommitPrepMinTime();</td><td>Minimum time for commit preparation</td></tr>
-		<tr><td>public long getCommitPrepMaxTime();</td><td>Maximum time for commit preparation</td></tr>
-		<tr class="highlight"><td>public long getCommitPrepAvgTime();</td><td>Average time for commit preparation</td></tr>
-		<tr><td>public long getConstraintViolationCount();</td><td>Number of constraint violations</td></tr>
-		<tr class="highlight"><td>public long getWALogWriteCount();</td><td>Number of writes to the Write Ahead Log</td></tr>
-		<tr><td>public long getWALogWriteMinTime();</td><td>Minimum time of a write to the Write Ahead Log</td></tr>
-		<tr class="highlight"><td>public long getWALogWriteMaxTime();</td><td>Maximum time of a write to the Write Ahead Log</td></tr>
-		<tr><td>public long getWALogWriteAvgTime();</td><td>Average time of a write to the Write Ahead Log</td></tr>
-		<tr class="highlight"><td>public long getCommitCount();</td><td>Number of commits</td></tr>
-		<tr><td>public long getCommitMinTime();</td><td>Minimum time for a commit</td></tr>
-		<tr class="highlight"><td>public long getCommitMaxTime();</td><td>Maximum time for a commit</td></tr>
-		<tr><td>public long getCommitAvgTime();</td><td>Average time for a commit</td></tr>
-		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
-<h1>Thrift Server Metrics</h1>
-<p>JMX Object Name: org.apache.accumulo.server.metrics:type=ThriftMetricsMBean,name= &lt;thread name&gt;</p>
-<table>
-	<thead>
-		<tr><td>Method Name</td><td>Description</td></tr>
-	</thead>
-	<tbody>
-		<tr class="highlight"><td>public long getIdleCount();</td><td>Number of times the Thrift server has been idle</td></tr>
-		<tr><td>public long getIdleMinTime();</td><td>Minimum amount of time the Thrift server has been idle</td></tr>
-		<tr class="highlight"><td>public long getIdleMaxTime();</td><td>Maximum amount of time the Thrift server has been idle</td></tr>
-		<tr><td>public long getIdleAvgTime();</td><td>Average time the Thrift server has been idle</td></tr>
-		<tr class="highlight"><td>public long getExecutionCount();</td><td>Number of calls processed by the Thrift server</td></tr>
-		<tr><td>public long getExecutionMinTime();</td><td>Minimum amount of time executing method</td></tr>
-		<tr class="highlight"><td>public long getExecutionMaxTime();</td><td>Maximum amount of time executing method</td></tr>
-		<tr><td>public long getExecutionAvgTime();</td><td>Average time executing methods</td></tr>
-		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
-	</tbody>
-</table>
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/timestamps.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/timestamps.html b/server/monitor/src/main/resources/docs/timestamps.html
deleted file mode 100644
index 52290c7..0000000
--- a/server/monitor/src/main/resources/docs/timestamps.html
+++ /dev/null
@@ -1,160 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Timestamps</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Timestamps</h1>
-
-<p>Everything inserted into Accumulo has a timestamp.  If the user does not
-set it, then the system will set the timestamp.  The timestamp is the last
-thing Accumulo sorts on.  When two keys have the same row, column family,
-column qualifier, and column visibility, the timestamps of the two keys are
-compared.
-
-<p>Timestamps are sorted in descending order, so the most recent data comes
-first.  When a table is created in Accumulo, by default it has a versioning
-iterator that only shows the most recent version.  In the example below two
-identical keys are inserted.  The scan after that only shows the most recent
-version.  However, when the versioning iterator configuration is changed, both
-are seen.  When data is inserted with a lower timestamp than existing data, it
-will fall behind the existing data and may not be seen, depending on the
-versioning settings.  This is why the insert made with a timestamp of 500 is
-not seen in the scan below.
-
-<p><pre>
-root@ac12&gt; createtable foo
-root@ac12 foo&gt; 
-root@ac12 foo&gt; 
-root@ac12 foo&gt; insert r1 cf1 cq1 value1                                   
-root@ac12 foo&gt; insert r1 cf1 cq1 value2
-root@ac12 foo&gt; scan -st
-r1 cf1:cq1 [] 1279906856203    value2
-root@ac12 foo&gt; config -t foo -f iterator                                  
----------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
-SCOPE    | NAME                                        | VALUE
----------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
-table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.majc.vers.opt.maxVersions .. | 1
-table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.minc.vers.opt.maxVersions .. | 1
-table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.scan.vers.opt.maxVersions .. | 1
----------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
-root@ac12 foo&gt; config -t foo -s table.iterator.scan.vers.opt.maxVersions=3
-root@ac12 foo&gt; config -t foo -s table.iterator.minc.vers.opt.maxVersions=3
-root@ac12 foo&gt; config -t foo -s table.iterator.majc.vers.opt.maxVersions=3
-root@ac12 foo&gt; scan -st
-r1 cf1:cq1 [] 1279906856203    value2
-r1 cf1:cq1 [] 1279906853170    value1
-root@ac12 foo&gt; insert -t 600 r1 cf1 cq1 value3
-root@ac12 foo&gt; insert -t 500 r1 cf1 cq1 value4
-root@ac12 foo&gt; scan -st
-r1 cf1:cq1 [] 1279906856203    value2
-r1 cf1:cq1 [] 1279906853170    value1
-r1 cf1:cq1 [] 600    value3
-root@ac12 foo&gt;
-
-</pre>
-
-<p>Deletes are special keys in Accumulo that get sorted along with all the
-other data.  When a delete key is inserted, Accumulo will not show anything
-that has a timestamp less than or equal to the delete key.  In the example
-below an insert is made with timestamp 5 and then a delete is inserted with
-timestamp 3.  The scan after that shows that the delete marker does not hide
-the key.  However, when a delete is inserted with timestamp 5, nothing can be
-seen.  Once a delete marker is inserted, it is there until a full major
-compaction occurs.  That is why the insert made after the delete cannot be
-seen.  The insert after the flush and compact commands can be seen because the
-delete marker is gone.  The flush forced a minor compaction and the compact
-forced a full major compaction.
-
-<p><pre>
-root@ac12&gt; createtable bar
-root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val1
-root@ac12 bar&gt; scan -st
-r1 cf1:cq1 [] 5    val1
-root@ac12 bar&gt; delete -t 3 r1 cf1 cq1     
-root@ac12 bar&gt; scan
-r1 cf1:cq1 []    val1
-root@ac12 bar&gt; scan -st
-r1 cf1:cq1 [] 5    val1
-root@ac12 bar&gt; delete -t 5 r1 cf1 cq1
-root@ac12 bar&gt; scan -st              
-root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val2
-root@ac12 bar&gt; scan -st
-root@ac12 bar&gt; flush -t bar
-23 14:01:36,587 [shell.Shell] INFO : Flush of table bar initiated...
-root@ac12 bar&gt; compact -t bar
-23 14:02:00,042 [shell.Shell] INFO : Compaction of table bar scheduled for 20100723140200EDT
-root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val1
-root@ac12 bar&gt; scan
-r1 cf1:cq1 []    val1
-</pre>
-
-<p>If two inserts are made into Accumulo with the same row, column, and
-timestamp, then the behavior is non-deterministic.
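-
-<p>Through the Java API, a timestamp can be set explicitly on a mutation.  The
-sketch below is hypothetical and assumes an existing Connector named conn; it
-reproduces the insert with timestamp 600 from the session above.
-
-<p><pre>
-BatchWriter writer = conn.createBatchWriter("foo", new BatchWriterConfig());
-Mutation m = new Mutation("r1");
-// explicit timestamp of 600; omit the long argument and the tablet
-// server assigns the timestamp at write time
-m.put("cf1", "cq1", 600L, new Value("value3".getBytes()));
-writer.addMutation(m);
-writer.close();
-</pre>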
-
-<p>Accumulo 1.2 introduces the concept of logical time.  This ensures that
-timestamps set by Accumulo always move forward.  There have been many problems
-caused by tablet servers with different system times.  In the case where a
-tablet server's time is in the future, tablets hosted on that tablet server and
-then migrated will have future timestamps in their data.  This can cause newer
-keys to fall behind existing keys, which can result in seeing older data or not
-seeing data if a new key falls behind an old delete.  Logical time prevents
-this by ensuring that Accumulo-set timestamps never go backwards, on a per
-tablet basis.  So if a tablet server's time is a year in the future, then any
-tablet hosted there will generate timestamps a year in the future, even when
-later hosted on a server with the correct time.  Logical time can be configured
-on a per table basis to either set time in millis or to use a per tablet
-counter.  The per tablet counter gives unique one-up timestamps on a per
-mutation basis.  When using time in millis, if two things arrive within the
-same millisecond then both receive the same timestamp.
-
-<p>The example below shows a table created using a per tablet counter for
-timestamps.  Two inserts are made; the first gets timestamp 0, the second 1.
-After that the table is split into two tablets and two more inserts are made.
-These inserts get the same timestamp because they are made on different
-tablets.  When the original tablet is split into two, the two child tablets
-inherit the next timestamp of their parent and start from there.  So do not
-expect this configuration to offer unique timestamps across a table.  Its only
-purpose is to uniquely order events within a tablet.
-
-<p><pre>
-root@ac12 foo&gt; createtable -tl logical
-root@ac12 logical&gt; insert 000892 person name "John Doe"
-root@ac12 logical&gt; insert 003042 person name "Jane Doe"
-root@ac12 logical&gt; scan -st
-000892 person:name [] 0    John Doe
-003042 person:name [] 1    Jane Doe
-root@ac12 logical&gt;
-root@ac12 logical&gt; addsplits -t logical 002000
-root@ac12 logical&gt; insert 003042 person address "123 Somewhere"
-root@ac12 logical&gt; insert 000892 person address "123 Nowhere"  
-root@ac12 logical&gt; scan -st
-000892 person:address [] 2    123 Nowhere
-000892 person:name [] 0    John Doe
-003042 person:address [] 2    123 Somewhere
-003042 person:name [] 1    Jane Doe
-root@ac12 logical&gt; 
- 
-</pre>
-
-</body>
-</html>


[2/6] ACCUMULO-1487, ACCUMULO-1491 Stop packaging docs for monitor

Posted by ct...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.client
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.client b/server/monitor/src/main/resources/docs/examples/README.client
deleted file mode 100644
index 64343eb..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.client
+++ /dev/null
@@ -1,79 +0,0 @@
-Title: Apache Accumulo Client Examples
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This document shows how to run the simplest Java examples.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
-
- * Flush.java - flushes a table
- * RowOperations.java - reads and writes rows
- * ReadWriteExample.java - creates a table, writes to it, and reads from it
-
-Using the accumulo command, you can run the simple client examples by providing
-their class name and enough arguments to find your Accumulo instance.  For
-example, the Flush class will flush a table:
-
-    $ PACKAGE=org.apache.accumulo.examples.simple.client
-    $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-
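-The same flush can be requested through the Java API.  The line below is a
-sketch, assuming a Connector named conn has already been obtained for the
-instance:
-
-    // flush the entire trace table and wait for the flush to complete;
-    // null start and end rows mean the whole table
-    conn.tableOperations().flush("trace", null, null, true);
-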
-The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
-and Scanner:
-
-    $ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper 
-    2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,756 [client.RowOperations] INFO : This is row1 and row3
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,765 [client.RowOperations] INFO : This is just row3
-    2013-01-14 14:45:24,769 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-
-To create a table, write to it and read from it:
-
-    $ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read 
-    hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%03; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%04; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%05; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.combiner
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.combiner b/server/monitor/src/main/resources/docs/examples/README.combiner
deleted file mode 100644
index d1ba6e9..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.combiner
+++ /dev/null
@@ -1,70 +0,0 @@
-Title: Apache Accumulo Combiner Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java class, which can be found in org.apache.accumulo.examples.simple.combiner in the examples-simple module:
-
- * StatsCombiner.java - a combiner that calculates max, min, sum, and count
-
-This is a simple combiner example.  To build this example, run Maven and then
-copy the produced jar into the Accumulo lib dir.  This is already done in the
-tar distribution.
-
-    $ bin/accumulo shell -u username
-    Enter current password for 'username'@'instance': ***
-    
-    Shell - Apache Accumulo Interactive Shell
-    - 
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable runners
-    username@instance runners> setiter -t runners -p 10 -scan -minc -majc -n decStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
-    Combiner that keeps track of min, max, sum, and count
-    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.: 
-    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: stat
-    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 10
-    username@instance runners> setiter -t runners -p 11 -scan -minc -majc -n hexStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
-    Combiner that keeps track of min, max, sum, and count
-    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.: 
-    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: hstat
-    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 16
-    username@instance runners> insert 123456 name first Joe
-    username@instance runners> insert 123456 stat marathon 240
-    username@instance runners> scan
-    123456 name:first []    Joe
-    123456 stat:marathon []    240,240,240,1
-    username@instance runners> insert 123456 stat marathon 230
-    username@instance runners> insert 123456 stat marathon 220
-    username@instance runners> scan
-    123456 name:first []    Joe
-    123456 stat:marathon []    220,240,690,3
-    username@instance runners> insert 123456 hstat virtualMarathon 6a
-    username@instance runners> insert 123456 hstat virtualMarathon 6b
-    username@instance runners> scan
-    123456 hstat:virtualMarathon []    6a,6b,d5,2
-    123456 name:first []    Joe
-    123456 stat:marathon []    220,240,690,3
-
-In this example a table is created and the example stats combiner is applied to
-the column families stat and hstat.  The stats combiner computes min, max, sum,
-and count.  It can be configured to use a different base or radix.  In the
-example above the column family stat is configured for base 10 and the column
-family hstat is configured for base 16.
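-
-The same iterators could be attached through the Java API instead of the shell.
-The sketch below is hypothetical and assumes a Connector named conn and the
-example jar on the classpath:
-
-    IteratorSetting setting = new IteratorSetting(10, "decStats",
-        "org.apache.accumulo.examples.simple.combiner.StatsCombiner");
-    setting.addOption("columns", "stat");
-    setting.addOption("radix", "10");
-    conn.tableOperations().attachIterator("runners", setting);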

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.constraints
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.constraints b/server/monitor/src/main/resources/docs/examples/README.constraints
deleted file mode 100644
index 4a73f45..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.constraints
+++ /dev/null
@@ -1,54 +0,0 @@
-Title: Apache Accumulo Constraints Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.constraints in the examples-simple module:
-
- * AlphaNumKeyConstraint.java - a constraint that requires alphanumeric keys
- * NumericValueConstraint.java - a constraint that requires numeric string values
-
-This is an example of how to create a table with constraints.  Below a table is
-created with two example constraints.  One constraint does not allow
-non-alphanumeric keys.  The other constraint does not allow non-numeric values.
-Two inserts that violate these constraints are attempted and denied.  The scan
-at the end shows the inserts were not allowed.
-
-    $ ./bin/accumulo shell -u username -p password
-    
-    Shell - Apache Accumulo Interactive Shell
-    - 
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable testConstraints
-    username@instance testConstraints> constraint -a org.apache.accumulo.examples.simple.constraints.NumericValueConstraint
-    username@instance testConstraints> constraint -a org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint
-    username@instance testConstraints> insert r1 cf1 cq1 1111
-    username@instance testConstraints> insert r1 cf1 cq1 ABC
-      Constraint Failures:
-          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint, violationCode:1, violationDescription:Value is not numeric, numberOfViolatingMutations:1)
-    username@instance testConstraints> insert r1! cf1 cq1 ABC 
-      Constraint Failures:
-          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint, violationCode:1, violationDescription:Value is not numeric, numberOfViolatingMutations:1)
-          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint, violationCode:1, violationDescription:Row was not alpha numeric, numberOfViolatingMutations:1)
-    username@instance testConstraints> scan
-    r1 cf1:cq1 []    1111
-    username@instance testConstraints> 
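-
-The same constraints could be added programmatically.  The sketch below assumes
-a Connector named conn:
-
-    conn.tableOperations().addConstraint("testConstraints",
-        "org.apache.accumulo.examples.simple.constraints.NumericValueConstraint");
-    conn.tableOperations().addConstraint("testConstraints",
-        "org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint");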
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.dirlist
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.dirlist b/server/monitor/src/main/resources/docs/examples/README.dirlist
deleted file mode 100644
index e505cf9..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.dirlist
+++ /dev/null
@@ -1,114 +0,0 @@
-Title: Apache Accumulo File System Archive
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example stores filesystem information in Accumulo.  The information is stored in the following three tables.  More information about the table structures can be found at the end of README.dirlist.
-
- * directory table : This table stores information about the filesystem directory structure.
 * index table     : This table stores a file name index.  It can be used to quickly find files with a given name, suffix, or prefix.
 * data table      : This table stores the file data.  Files with duplicate data are only stored once.
-
-This example shows how to use Accumulo to store a file system history.  It has the following classes:
-
 * Ingest.java - Recursively lists the files and directories under a given path, ingests their names and file info into one Accumulo table, indexes the file names in a separate table, and ingests the file data into a third table.
- * QueryUtil.java - Provides utility methods for getting the info for a file, listing the contents of a directory, and performing single wild card searches on file or directory names.
- * Viewer.java - Provides a GUI for browsing the file system information stored in Accumulo.
- * FileCount.java - Computes recursive counts over file system information and stores them back into the same Accumulo table.
- 
-To begin, ingest some data with Ingest.java.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
-
-This may take some time if there are large files in the /local/username/workspace directory.  If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
-Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
-If you modify a file or add new files in the directory ingested (e.g. /local/username/workspace), you can run Ingest again to add new information into the Accumulo tables.
-
-To browse the data ingested, use Viewer.java.  Be sure to give the "username" user the authorizations to see the data (in this case, exampleVis).  First run
-
-    $ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-
-then run the Viewer:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
-
-To list the contents of specific directories, use QueryUtil.java.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
-
-To perform searches on file or directory names, also use QueryUtil.java.  Search terms must contain no more than one wild card and cannot contain "/".
-*Note* these queries run on the _indexTable_ table instead of the dirTable table.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
-
-To count the number of direct children (directories and files) and descendants (children and children's descendants, directories and files), run the FileCount over the dirTable table.
-The results are written back to the same table.  FileCount reads from and writes to Accumulo.  This requires scan authorizations for the read and a visibility for the data written.
-In this example, the authorizations and visibility are set to the same value, exampleVis.  See README.visibility for more information on visibility and authorizations.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-
-## Directory Table
-
-Here is an illustration of what data looks like in the directory table:
-
-    row colf:colq [vis]	value
-    000 dir:exec [exampleVis]    true
-    000 dir:hidden [exampleVis]    false
-    000 dir:lastmod [exampleVis]    1291996886000
-    000 dir:length [exampleVis]    1666
-    001/local dir:exec [exampleVis]    true
-    001/local dir:hidden [exampleVis]    false
-    001/local dir:lastmod [exampleVis]    1304945270000
-    001/local dir:length [exampleVis]    272
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:exec [exampleVis]    false
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:hidden [exampleVis]    false
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
-
-The rows are of the form depth + path, where depth is the number of slashes ("/") in the path padded to 3 digits.  This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would for example see all the subdirectories of /local before you saw /usr.
-For directories the column family is "dir".  For files the column family is Long.MAX_VALUE - lastModified, encoded as bytes rather than a string so that newer versions sort earlier.
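-
-A hypothetical helper (not the actual Ingest code) showing how such a row could
-be built:
-
-    // build "depth + path" rows, e.g. "002/local/Accumulo.README"
-    static String buildRow(String path) {
-      int depth = 0;
-      for (char c : path.toCharArray())
-        if (c == '/')
-          depth++;
-      return String.format("%03d%s", depth, path);
-    }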
-
-## Index Table
-
-Here is an illustration of what data looks like in the index table:
-
-    row colf:colq [vis]
-    fAccumulo.README i:002/local/Accumulo.README [exampleVis]
-    flocal i:001/local [exampleVis]
-    rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
-    rlacol i:001/local [exampleVis]
-
-The values of the index table are null.  The rows are of the form "f" + filename or "r" + reverse file name.  This is to enable searches with wildcards at the beginning, middle, or end.
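-
-A sketch of how the forward and reverse rows could be derived from a file name
-(hypothetical, for illustration):
-
-    String filename = "Accumulo.README";
-    String forward = "f" + filename;                              // fAccumulo.README
-    String reverse = "r" + new StringBuilder(filename).reverse(); // rEMDAER.olumuccA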
-
-## Data Table
-
-Here is an illustration of what data looks like in the data table:
-
-    row colf:colq [vis]	value
-    274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
-    274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
-    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball or RPM release of accumulo, [truncated]
-    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
-
-The rows are the md5 hash of the file.  Some column family : column qualifier pairs are "refs" : hash of file name + null byte + property name, in which case the value is the property value.  There can be multiple references to the same file, which are distinguished by the hash of the file name.
-Other column family : column qualifier pairs are "~chunk" : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file.  There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.
-
-There may exist multiple copies of the same file (with the same md5 hash) with different chunk sizes or different visibilities.  There is an iterator that can be set on the data table that combines these copies into a single copy with a visibility taken from the visibilities of the file references, e.g. (vis from ref1)|(vis from ref2). 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.export
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.export b/server/monitor/src/main/resources/docs/examples/README.export
deleted file mode 100644
index 6430449..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.export
+++ /dev/null
@@ -1,91 +0,0 @@
-Title: Apache Accumulo Export/Import Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-Accumulo provides a mechanism to export and import tables.  This README shows
-how to use this feature.
-
-The shell session below shows creating a table, inserting data, and exporting
-the table.  A table must be offline to export it, and it should remain offline
-for the duration of the distcp.  An easy way to take a table offline without
-interrupting access to it is to clone it and take the clone offline.
-
-    root@test15> createtable table1
-    root@test15 table1> insert a cf1 cq1 v1
-    root@test15 table1> insert h cf1 cq1 v2
-    root@test15 table1> insert z cf1 cq1 v3
-    root@test15 table1> insert z cf1 cq2 v4
-    root@test15 table1> addsplits -t table1 b r
-    root@test15 table1> scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15> config -t table1 -s table.split.threshold=100M
-    root@test15 table1> clonetable table1 table1_exp
-    root@test15 table1> offline table1_exp
-    root@test15 table1> exporttable -t table1_exp /tmp/table1_export
-    root@test15 table1> quit
-
-After executing the export command, a few files are created in the export
-directory in HDFS.  One of them is a list of files to distcp, as shown below.
-
-    $ hadoop fs -ls /tmp/table1_export
-    Found 2 items
-    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
-    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
-    $ hadoop fs -cat /tmp/table1_export/distcp.txt
-    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
-    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-
-Before the table can be imported, it must be copied using distcp.  After the
-distcp completes, the cloned table may be deleted.
-
-    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-
-The Accumulo shell session below shows importing the table and inspecting it.
-The data, splits, config, and logical time information for the table were
-preserved.
-
-    root@test15> importtable table1_copy /tmp/table1_export_dest
-    root@test15> table table1_copy
-    root@test15 table1_copy> scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15 table1_copy> getsplits -t table1_copy
-    b
-    r
-    root@test15> config -t table1_copy -f split
-    ---------+--------------------------+-------------------------------------------
-    SCOPE    | NAME                     | VALUE
-    ---------+--------------------------+-------------------------------------------
-    default  | table.split.threshold .. | 1G
-    table    |    @override ........... | 100M
-    ---------+--------------------------+-------------------------------------------
-    root@test15> tables -l
-    accumulo.metadata    =>        !0
-    accumulo.root        =>        +r
-    table1_copy          =>         5
-    trace                =>         1
-    root@test15 table1_copy> scan -t accumulo.metadata -b 5 -c srv:time
-    5;b srv:time []    M1343224500467
-    5;r srv:time []    M1343224500467
-    5< srv:time []    M1343224500467
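-
-The same export and import can also be driven from the Java client API.  A
-rough sketch (instance name, zookeeper hosts, and credentials here are
-placeholders):
-
-    import java.util.Collections;
-    import org.apache.accumulo.core.client.Connector;
-    import org.apache.accumulo.core.client.ZooKeeperInstance;
-    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-
-    Connector conn = new ZooKeeperInstance("test15", "localhost")
-        .getConnector("root", new PasswordToken("secret"));
-    conn.tableOperations().clone("table1", "table1_exp", true,
-        Collections.<String,String>emptyMap(), Collections.<String>emptySet());
-    conn.tableOperations().offline("table1_exp");
-    conn.tableOperations().exportTable("table1_exp", "/tmp/table1_export");
-    // ... run the distcp listed in distcp.txt, then on the destination:
-    conn.tableOperations().importTable("table1_copy", "/tmp/table1_export_dest");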
-
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.filedata
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.filedata b/server/monitor/src/main/resources/docs/examples/README.filedata
deleted file mode 100644
index 9f0016e..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.filedata
+++ /dev/null
@@ -1,47 +0,0 @@
-Title: Apache Accumulo File System Archive Example (Data Only)
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example archives file data into an Accumulo table.  Files with duplicate data are only stored once.
-The example has the following classes:
-
- * CharacterHistogram - A MapReduce that computes a histogram of byte frequency for each file and stores the histogram alongside the file data.  An example use of the ChunkInputFormat.
- * ChunkCombiner - An Iterator that dedupes file data and sets their visibilities to a combined visibility based on current references to the file data.
- * ChunkInputFormat - An Accumulo InputFormat that provides keys containing file info (List<Entry<Key,Value>>) and values with an InputStream over the file (ChunkInputStream).
- * ChunkInputStream - An input stream over file data stored in Accumulo.
- * FileDataIngest - Takes a list of files and archives them into Accumulo keyed on hashes of the files.
- * FileDataQuery - Retrieves file data based on the hash of the file. (Used by the dirlist.Viewer.)
- * KeyUtil - A utility for creating and parsing null-byte separated strings into/from Text objects.
- * VisibilityCombiner - A utility for merging visibilities into the form (VIS1)|(VIS2)|...
-
-This example is coupled with the dirlist example.  See README.dirlist for instructions.
-
-If you haven't already run the README.dirlist example, ingest a file with FileDataIngest.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
-
-Open the accumulo shell and look at the data.  The row is the MD5 hash of the file, which you can verify by running a command such as 'md5sum' on the file.
-
-    > scan -t dataTable
-
-Run the CharacterHistogram MapReduce to add some information about the file.
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
-
-Scan again to see the histogram stored in the 'info' column family.
-
-    > scan -t dataTable

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.filter
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.filter b/server/monitor/src/main/resources/docs/examples/README.filter
deleted file mode 100644
index a320554..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.filter
+++ /dev/null
@@ -1,110 +0,0 @@
-Title: Apache Accumulo Filter Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This is a simple filter example.  It uses the AgeOffFilter that is provided as 
-part of the core package org.apache.accumulo.core.iterators.user.  Filters are 
-iterators that select desired key/value pairs (or weed out undesired ones).  
-Filters extend the org.apache.accumulo.core.iterators.Filter class 
-and must implement a method accept(Key k, Value v).  This method returns true 
-if the key/value pair is to be delivered and false if it is to be ignored.
-Filter takes a "negate" parameter which defaults to false.  If set to true, the
-return value of the accept method is negated, so that key/value pairs accepted
-by the method are omitted by the Filter.
-
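-A custom filter is just a small Java class.  A minimal sketch (the class and
-column family name here are made up for illustration, not part of the example):
-
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.iterators.Filter;
-
-    public class ColumnFamilyFilter extends Filter {
-      @Override
-      public boolean accept(Key k, Value v) {
-        // deliver only entries in column family "a"; negate reverses this
-        return k.getColumnFamily().toString().equals("a");
-      }
-    }
-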
-    username@instance> createtable filtertest
-    username@instance filtertest> setiter -t filtertest -scan -p 10 -n myfilter -ageoff
-    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
-    ----------> set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
-    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
-    ----------> set AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day: 
-    username@instance filtertest> scan
-    username@instance filtertest> insert foo a b c
-    username@instance filtertest> scan
-    foo a:b []    c
-    username@instance filtertest> 
-    
-... wait 30 seconds ...
-    
-    username@instance filtertest> scan
-    username@instance filtertest> 
-
-Note the absence of the entry inserted more than 30 seconds ago.  Since the
-scope was set to "scan", this means the entry is still in Accumulo, but is
-being filtered out at query time.  To delete entries from Accumulo based on
-the ages of their timestamps, AgeOffFilters should be set up for the "minc"
-and "majc" scopes, as well.
-
-To force an ageoff of the persisted data, after setting up the ageoff iterator 
-on the "minc" and "majc" scopes you can flush and compact your table. This will
-happen automatically as a background operation on any table that is being 
-actively written to, but can also be requested in the shell.
-
-The first setiter command used the special -ageoff flag to specify the 
-AgeOffFilter, but any Filter can be configured by using the -class flag.  The 
-following commands show how to enable the AgeOffFilter for the minc and majc
-scopes using the -class flag, then flush and compact the table.
-
-    username@instance filtertest> setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
-    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
-    ----------> set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: 
-    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
-    ----------> set AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day: 
-    username@instance filtertest> flush
-    06 10:42:24,806 [shell.Shell] INFO : Flush of table filtertest initiated...
-    username@instance filtertest> compact
-    06 10:42:36,781 [shell.Shell] INFO : Compaction of table filtertest started for given range
-    username@instance filtertest> flush -t filtertest -w
-    06 10:42:52,881 [shell.Shell] INFO : Flush of table filtertest completed.
-    username@instance filtertest> compact -t filtertest -w
-    06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
-    06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
-    username@instance filtertest>
-
-By default, flush and compact execute in the background, but with the -w flag
-they will wait to return until the operation has completed.  Both are 
-demonstrated above, though only one call to each would be necessary.  A 
-specific table can be specified with -t.
-
-After the compaction runs, the newly created files will not contain any data 
-that should have been aged off, and the Accumulo garbage collector will remove 
-the old files.
-
-To see the iterator settings for a table, use config.
-
-    username@instance filtertest> config -t filtertest -f iterator
-    ---------+---------------------------------------------+---------------------------------------------------------------------------
-    SCOPE    | NAME                                        | VALUE
-    ---------+---------------------------------------------+---------------------------------------------------------------------------
-    table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.majc.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
-    table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.minc.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
-    table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.scan.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
-    ---------+---------------------------------------------+---------------------------------------------------------------------------
-    username@instance filtertest> 
-
-When setting new iterators, make sure to order their priority numbers 
-(specified with -p) in the order you would like the iterators to be applied.
-Also, each iterator must have a unique name and priority within each scope.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.helloworld
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.helloworld b/server/monitor/src/main/resources/docs/examples/README.helloworld
deleted file mode 100644
index be95014..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.helloworld
+++ /dev/null
@@ -1,47 +0,0 @@
-Title: Apache Accumulo Hello World Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.helloworld in the examples-simple module: 
-
- * InsertWithBatchWriter.java - Inserts 10K rows (50K entries) into accumulo with each row having 5 entries
- * ReadData.java - Reads all data between two rows
-
-Log into the accumulo shell:
-
-    $ ./bin/accumulo shell -u username -p password
-
-Create a table called 'hellotable':
-
-    username@instance> createtable hellotable	
-
-Launch a Java program that inserts data with a BatchWriter:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable 
-
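-In essence, InsertWithBatchWriter does something like the following sketch
-(condensed; the real class reads its connection parameters from the command
-line, and the column names here are illustrative):
-
-    import org.apache.accumulo.core.client.*;
-    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.hadoop.io.Text;
-
-    Connector conn = new ZooKeeperInstance("instance", "zookeepers")
-        .getConnector("username", new PasswordToken("password"));
-    BatchWriter bw = conn.createBatchWriter("hellotable", new BatchWriterConfig());
-    for (int i = 0; i < 10000; i++) {          // 10K rows
-      Mutation m = new Mutation(new Text(String.format("row_%04d", i)));
-      for (int j = 0; j < 5; j++)              // 5 entries per row = 50K entries
-        m.put(new Text("colf"), new Text("colq" + j), new Value("value".getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();  // flushes any remaining buffered mutations
-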
-On the accumulo status page at the URL below (where 'master' is replaced with the name or IP of your accumulo master), you should see 50K entries:
-	
-    http://master:50095/
-	
-To view the entries, use the shell to scan the table:
-
-    username@instance> table hellotable
-    username@instance hellotable> scan
-
-You can also use a Java class to scan the table:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.isolation
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.isolation b/server/monitor/src/main/resources/docs/examples/README.isolation
deleted file mode 100644
index 06d5aeb..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.isolation
+++ /dev/null
@@ -1,50 +0,0 @@
-Title: Apache Accumulo Isolation Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-
-Accumulo has an isolated scanner that ensures partial changes to rows are not
-seen.  Isolation is documented in ../docs/isolation.html and the user manual.  
-
-InterferenceTest is a simple example that shows the effects of scanning with
-and without isolation.  This program starts two threads.  One thread
-continually updates all of the values in a row to be the same thing, but
-different from what it used to be.  The other thread continually scans the
-table and checks that all values in a row are the same.  Without isolation the
-scanning thread will sometimes see different values, which is the result of
-reading the row at the same time a mutation is changing the row.
-
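-In client code, isolation is enabled by wrapping a regular scanner.  A brief
-sketch (assuming an existing Connector conn and the isotest table):
-
-    import org.apache.accumulo.core.client.IsolatedScanner;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.security.Authorizations;
-
-    Scanner scanner = new IsolatedScanner(
-        conn.createScanner("isotest", Authorizations.EMPTY));
-    // rows read from this scanner never expose partially applied mutations
-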
-Below, InterferenceTest is run without isolation enabled for 5000 iterations
-and it reports problems.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
-    ERROR Columns in row 053 had multiple values [53, 4553]
-    ERROR Columns in row 061 had multiple values [561, 61]
-    ERROR Columns in row 070 had multiple values [570, 1070]
-    ERROR Columns in row 079 had multiple values [1079, 1579]
-    ERROR Columns in row 088 had multiple values [2588, 1588]
-    ERROR Columns in row 106 had multiple values [2606, 3106]
-    ERROR Columns in row 115 had multiple values [4615, 3115]
-    finished
-
-Below, InterferenceTest is run with isolation enabled for 5000 iterations and
-it reports no problems.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
-    finished
-
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.mapred
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.mapred b/server/monitor/src/main/resources/docs/examples/README.mapred
deleted file mode 100644
index b98140f..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.mapred
+++ /dev/null
@@ -1,154 +0,0 @@
-Title: Apache Accumulo MapReduce Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example uses mapreduce and accumulo to compute word counts for a set of
-documents.  This is accomplished using a map-only mapreduce job and an
-accumulo table with combiners.
-
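-The mapper emits each word with a count of 1 into the "count" column family,
-and the SummingCombiner configured below adds the counts together at scan and
-compaction time.  A condensed sketch of such a map method (not the exact
-example code; the date qualifier is just what this run happened to use):
-
-    import java.io.IOException;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.hadoop.io.LongWritable;
-    import org.apache.hadoop.io.Text;
-    import org.apache.hadoop.mapreduce.Mapper;
-
-    public static class WordMapper extends Mapper<LongWritable,Text,Text,Mutation> {
-      @Override
-      public void map(LongWritable key, Text value, Context output)
-          throws IOException, InterruptedException {
-        for (String word : value.toString().split("\\s+")) {
-          Mutation m = new Mutation(new Text(word));
-          m.put(new Text("count"), new Text("20080906"), new Value("1".getBytes()));
-          output.write(null, m);  // null table name means the configured default table
-        }
-      }
-    }
-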
-To run this example you will need a directory in HDFS containing text files.
-The accumulo readme will be used to show how to run this example.
-
-    $ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
-    $ hadoop fs -ls /user/username/wc
-    Found 1 items
-    -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
-
-The first part of running this example is to create a table with a combiner
-for the column family "count".
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable wordCount
-    username@instance wordCount> setiter -class org.apache.accumulo.core.iterators.user.SummingCombiner -p 10 -t wordCount -majc -minc -scan
-    SummingCombiner interprets Values as Longs and adds them together.  A variety of encodings (variable length, fixed length, or string) are available
-    ----------> set SummingCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.: false
-    ----------> set SummingCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non-alphanum chars using %<hex>.: count
-    ----------> set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): <TRUE|FALSE>: false 
-    ----------> set SummingCombiner parameter type, <VARLEN|FIXEDLEN|STRING|fullClassName>: STRING
-    username@instance wordCount> quit
-
-After creating the table, run the word count map reduce job.
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
-    
-    11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
-    11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
-    11/02/07 18:20:13 INFO mapred.JobClient:  map 0% reduce 0%
-    11/02/07 18:20:20 INFO mapred.JobClient:  map 100% reduce 0%
-    11/02/07 18:20:22 INFO mapred.JobClient: Job complete: job_201102071740_0003
-    11/02/07 18:20:22 INFO mapred.JobClient: Counters: 6
-    11/02/07 18:20:22 INFO mapred.JobClient:   Job Counters 
-    11/02/07 18:20:22 INFO mapred.JobClient:     Launched map tasks=1
-    11/02/07 18:20:22 INFO mapred.JobClient:     Data-local map tasks=1
-    11/02/07 18:20:22 INFO mapred.JobClient:   FileSystemCounters
-    11/02/07 18:20:22 INFO mapred.JobClient:     HDFS_BYTES_READ=10487
-    11/02/07 18:20:22 INFO mapred.JobClient:   Map-Reduce Framework
-    11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
-    11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
-    11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
-
-After the map reduce job completes, query the accumulo table to see word
-counts.
-
-    $ ./bin/accumulo shell -u username -p password
-    username@instance> table wordCount
-    username@instance wordCount> scan -b the
-    the count:20080906 []    75
-    their count:20080906 []    2
-    them count:20080906 []    1
-    then count:20080906 []    1
-    there count:20080906 []    1
-    these count:20080906 []    3
-    this count:20080906 []    6
-    through count:20080906 []    1
-    time count:20080906 []    3
-    time. count:20080906 []    1
-    to count:20080906 []    27
-    total count:20080906 []    1
-    tserver, count:20080906 []    1
-    tserver.compaction.major.concurrent.max count:20080906 []    1
-    ...
-
-Another example to look at is
-org.apache.accumulo.examples.simple.mapreduce.UniqueColumns.  This example
-computes the unique set of columns in a table and shows how a map reduce job
-can directly read a table's files from HDFS.
-
-One more example available is 
-org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount.
-The TokenFileWordCount example works exactly the same as the WordCount example
-explained above except that it uses a token file rather than giving the 
-password directly to the map-reduce job (this avoids having the password 
-displayed in the job's configuration which is world-readable).
-
-To create a token file, use the create-token utility:
-
-    $ ./bin/accumulo create-token
-  
-It defaults to creating a PasswordToken, but you can specify the token class 
-with -tc (requires the fully qualified class name). Based on the token class, 
-it will prompt you for each property required to create the token.
-
-The last value it prompts for is a local filename to save to. If this file
-exists, it will append the new token to the end. Multiple tokens can exist in
-a file, but only the first one for each user will be recognized.
-
-Rather than waiting for the prompts, you can specify some options when calling
-create-token, for example:
-
-    $ ./bin/accumulo create-token -u root -p secret -f root.pw
-  
-would create a token file containing a PasswordToken for 
-user 'root' with password 'secret', saved to 'root.pw'.
-
-This local file needs to be uploaded to hdfs to be used with the 
-map-reduce job. For example, if the file were 'root.pw' in the local directory:
-
-    $ hadoop fs -put root.pw root.pw
-  
-This would put 'root.pw' in the user's home directory in hdfs. 
-
-Because the basic WordCount example uses Opts (which extends
-ClientOnRequiredTable) to parse its arguments, you can run it with a token
-file by calling the same command as explained above, replacing the password
-option with the token file option (use -tf instead of -p).
-
-    $ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
-
-In the above examples, username was 'root' and tokenfile was 'root.pw'.
-
-However, if you don't want to use the Opts class to parse arguments,
-TokenFileWordCount is an example of using the token file manually.
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
-
-The results should be the same as the WordCount example except that the
-authentication token was not stored in the configuration. It was instead 
-stored in a file that the map-reduce job pulled into the distributed cache.
-(If you ran either of these on the same table right after the 
-WordCount example, then the resulting counts should just double.)
-
-
-
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.maxmutation
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.maxmutation b/server/monitor/src/main/resources/docs/examples/README.maxmutation
deleted file mode 100644
index aa679a8..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.maxmutation
+++ /dev/null
@@ -1,47 +0,0 @@
-Title: Apache Accumulo MaxMutation Constraints Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This is an example of how to limit the size of mutations that will be accepted
-into a table.  Under the default configuration, accumulo does not limit the
-size of mutations that can be ingested.  Poorly behaved writers might
-inadvertently create mutations so large that they cause the tablet servers to
-run out of memory.  A simple constraint can be added to a table to reject very
-large mutations.
-
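-Constraints are Java classes implementing
-org.apache.accumulo.core.constraints.Constraint.  A minimal sketch of a
-size-limiting constraint (with an arbitrary fixed cap, unlike the example's
-memory-based limit):
-
-    import java.util.Collections;
-    import java.util.List;
-    import org.apache.accumulo.core.constraints.Constraint;
-    import org.apache.accumulo.core.data.Mutation;
-
-    public class SmallMutationConstraint implements Constraint {
-      private static final short TOO_BIG = 1;
-
-      @Override
-      public String getViolationDescription(short violationCode) {
-        return violationCode == TOO_BIG ? "mutation exceeded maximum size" : null;
-      }
-
-      @Override
-      public List<Short> check(Environment env, Mutation mutation) {
-        if (mutation.estimatedMemoryUsed() > 1 << 20)   // hypothetical 1 MB cap
-          return Collections.singletonList(TOO_BIG);
-        return null;  // null or an empty list means no violations
-      }
-    }
-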
-    $ ./bin/accumulo shell -u username -p password
-    
-    Shell - Apache Accumulo Interactive Shell
-    - 
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable test_ingest
-    username@instance test_ingest> config -t test_ingest -s table.constraint.1=org.apache.accumulo.examples.simple.constraints.MaxMutationSize
-    username@instance test_ingest> 
-
-
-Now the table will reject any mutation that is larger than 1/256th of the 
-working memory of the tablet server.  The following command attempts to ingest 
-a single row with 10000 columns, which exceeds the memory limit:
-
-    $ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000 
-ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.regex
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.regex b/server/monitor/src/main/resources/docs/examples/README.regex
deleted file mode 100644
index f23190f..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.regex
+++ /dev/null
@@ -1,58 +0,0 @@
-Title: Apache Accumulo Regex Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example uses mapreduce and accumulo to find items using regular expressions.
-This is accomplished using a map-only mapreduce job and a scan-time iterator.
-
-To run this example you will need some data in a table.  The following will
-put a trivial amount of data into accumulo using the accumulo shell:
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable input
-    username@instance> insert dogrow dogcf dogcq dogvalue
-    username@instance> insert catrow catcf catcq catvalue
-    username@instance> quit
-
-The RegexExample class sets an iterator on the scanner.  This does pattern matching
-against each key/value in accumulo, and only returns matching items.  It will do this
-in parallel and will store the results in files in hdfs.
-
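-Outside of mapreduce, the same filtering can be done on a plain scanner with
-the RegExFilter iterator.  A short sketch (assuming an existing Connector
-conn):
-
-    import org.apache.accumulo.core.client.IteratorSetting;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.iterators.user.RegExFilter;
-    import org.apache.accumulo.core.security.Authorizations;
-
-    Scanner scanner = conn.createScanner("input", Authorizations.EMPTY);
-    IteratorSetting is = new IteratorSetting(50, "regex", RegExFilter.class);
-    RegExFilter.setRegexs(is, "dog.*", null, null, null, false);  // row regex only
-    scanner.addScanIterator(is);
-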
-The following will search for any rows in the input table that start with "dog":
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
-
-    $ hadoop fs -ls /tmp/output
-    Found 3 items
-    -rw-r--r--   1 username supergroup          0 2013-01-10 14:11 /tmp/output/_SUCCESS
-    drwxr-xr-x   - username supergroup          0 2013-01-10 14:10 /tmp/output/_logs
-    -rw-r--r--   1 username supergroup         51 2013-01-10 14:10 /tmp/output/part-m-00000
-
-We can see the output of our little map-reduce job:
-
-    $ hadoop fs -text /tmp/output/part-m-00000
-    dogrow dogcf:dogcq [] 1357844987994 false	dogvalue
-    $
-
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.reservations
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.reservations b/server/monitor/src/main/resources/docs/examples/README.reservations
deleted file mode 100644
index a966ed9..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.reservations
+++ /dev/null
@@ -1,66 +0,0 @@
-Title: Apache Accumulo Reservations Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example shows running a simple reservation system implemented using
-conditional mutations.  This system guarantees that only one user can hold a
-reservation for a resource at a time, even when many try concurrently.  The
-example's reserve command allows multiple users to be specified.  When this is
-done, it creates a separate reservation thread for each user.  In the example
-below, threads are spun up for alice, bob, eve, mallory, and trent to reserve
-room06 on 20140101.  Bob ends up getting the reservation and everyone else is
-put on a wait list.  The example code will take any string for what, when, and
-who.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
-    >connect test16 localhost root secret ars
-      connected
-    >
-      Commands : 
-        reserve <what> <when> <who> {who}
-        cancel <what> <when> <who>
-        list <what> <when>
-    >reserve room06 20140101 alice bob eve mallory trent
-                       bob : RESERVED
-                   mallory : WAIT_LISTED
-                     alice : WAIT_LISTED
-                     trent : WAIT_LISTED
-                       eve : WAIT_LISTED
-    >list room06 20140101
-      Reservation holder : bob
-      Wait list : [mallory, alice, trent, eve]
-    >cancel room06 20140101 alice
-    >cancel room06 20140101 bob
-    >list room06 20140101
-      Reservation holder : mallory
-      Wait list : [trent, eve]
-    >quit
-
-Scanning the table in the Accumulo shell after running the example shows the
-following:
-
-    root@test16> table ars
-    root@test16 ars> scan
-    room06:20140101 res:0001 []    mallory
-    room06:20140101 res:0003 []    trent
-    room06:20140101 res:0004 []    eve
-    room06:20140101 tx:seq []    6
-
-The tx:seq column is incremented for each update to the row, allowing for
-detection of concurrent changes.  For an update to go through, the sequence
-number must not have changed since the data was read.  If it does change,
-the conditional mutation will fail and the example code will retry.
-
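-A conditional update like the ones the example performs looks roughly like the
-sketch below (assuming an existing Connector conn; the row and sequence number
-come from the scan above, while the new reservation entry is made up):
-
-    import org.apache.accumulo.core.client.ConditionalWriter;
-    import org.apache.accumulo.core.client.ConditionalWriterConfig;
-    import org.apache.accumulo.core.data.Condition;
-    import org.apache.accumulo.core.data.ConditionalMutation;
-    import org.apache.accumulo.core.data.Value;
-
-    ConditionalWriter cw =
-        conn.createConditionalWriter("ars", new ConditionalWriterConfig());
-    ConditionalMutation cm = new ConditionalMutation("room06:20140101",
-        new Condition("tx", "seq").setValue("6"));   // apply only if seq is still 6
-    cm.put("tx", "seq", new Value("7".getBytes()));
-    cm.put("res", "0005", new Value("carol".getBytes()));
-    if (cw.write(cm).getStatus() != ConditionalWriter.Status.ACCEPTED) {
-      // another client changed the row first; re-read and retry
-    }
-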

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.rowhash
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.rowhash b/server/monitor/src/main/resources/docs/examples/README.rowhash
deleted file mode 100644
index e7fbfed..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.rowhash
+++ /dev/null
@@ -1,59 +0,0 @@
-Title: Apache Accumulo RowHash Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example shows a simple map/reduce job that reads from an accumulo table and
-writes back into that table.
-
-To run this example you will need some data in a table.  The following will
-put a trivial amount of data into accumulo using the accumulo shell:
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable input
-    username@instance> insert a-row cf cq value
-    username@instance> insert b-row cf cq value
-    username@instance> quit
-
-The RowHash class will insert a hash for each row in the table if it contains a
-specified column.  Here's how to run the map/reduce job:
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq 
-
-Now we can scan the table and see the hashes:
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> scan -t input
-    a-row cf:cq []    value
-    a-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
-    b-row cf:cq []    value
-    b-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
-    username@instance> 
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.shard
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.shard b/server/monitor/src/main/resources/docs/examples/README.shard
deleted file mode 100644
index f79015a..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.shard
+++ /dev/null
@@ -1,67 +0,0 @@
-Title: Apache Accumulo Shard Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-Accumulo has an iterator called the intersecting iterator which supports querying a term index that is partitioned by 
-document, or "sharded". This example shows how to use the intersecting iterator through these four programs:
-
- * Index.java - Indexes a set of text files into an Accumulo table
- * Query.java - Finds documents containing a given set of terms.
- * Reverse.java - Reads the index table and writes a map of documents to terms into another table.
- * ContinuousQuery.java - Uses the table populated by Reverse.java to select N random terms per document, then continuously and randomly queries those terms.
-
-To run these example programs, create two tables like below.
-
-    username@instance> createtable shard
-    username@instance shard> createtable doc2term
-
-After creating the tables, index some files.  The following command indexes all of the java files in the Accumulo source code.
-
-    $ cd /local/username/workspace/accumulo/
-    $ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
-
-The following command queries the index to find all files containing 'foo' and 'bar'.
-
-    $ cd $ACCUMULO_HOME
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
-    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
-    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
-    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/VisibilityEvaluatorTest.java
-    /local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/RowDeleteTest.java
-    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/logger/TestLogWriter.java
-    /local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/DeleteEverythingTest.java
-    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/data/KeyExtentTest.java
-    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/constraints/MetadataConstraintsTest.java
-    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
-    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
-    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-
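-Query.java essentially configures an IntersectingIterator on a BatchScanner.
-A condensed sketch (assuming an existing Connector conn):
-
-    import java.util.Collections;
-    import java.util.Map.Entry;
-    import org.apache.accumulo.core.client.BatchScanner;
-    import org.apache.accumulo.core.client.IteratorSetting;
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Range;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.iterators.user.IntersectingIterator;
-    import org.apache.accumulo.core.security.Authorizations;
-    import org.apache.hadoop.io.Text;
-
-    BatchScanner bs = conn.createBatchScanner("shard", Authorizations.EMPTY, 10);
-    IteratorSetting ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
-    IntersectingIterator.setColumnFamilies(ii,
-        new Text[] {new Text("foo"), new Text("bar")});
-    bs.addScanIterator(ii);
-    bs.setRanges(Collections.singleton(new Range()));
-    for (Entry<Key,Value> entry : bs)
-      System.out.println(entry.getKey().getColumnQualifier());  // a matching document
-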
-In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
-
-Below, ContinuousQuery is run using 5 terms, so it selects 5 random terms from each document, then continually
-picks one document's set of 5 terms at random and queries for them.  It prints the number of matching documents and the query time in seconds.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
-    [public, core, class, binarycomparable, b] 2  0.081
-    [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
-    [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
-    [getpackage, testversion, util, version, 55] 1  0.048
-    [for, static, println, public, the] 55  0.211
-    [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
-    [string, public, long, 0, wait] 12  0.132

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.tabletofile
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.tabletofile b/server/monitor/src/main/resources/docs/examples/README.tabletofile
deleted file mode 100644
index 8a4180e..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.tabletofile
+++ /dev/null
@@ -1,59 +0,0 @@
-Title: Apache Accumulo Table-to-File Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example uses mapreduce to extract specified columns from an existing table.
-
-To run this example you will need some data in a table.  The following will
-put a trivial amount of data into accumulo using the accumulo shell:
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> createtable input
-    username@instance> insert dog cf cq dogvalue
-    username@instance> insert cat cf cq catvalue
-    username@instance> insert junk family qualifier junkvalue
-    username@instance> quit
-
-The TableToFile class configures a map-only job to read the specified columns and
-write the key/value pairs to a file in HDFS.
-
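-In the job setup, the columns to read are passed to AccumuloInputFormat.  A
-fragment of what that configuration looks like (sketch only; connector and
-zookeeper setup are omitted, and Job.getInstance assumes Hadoop 2):
-
-    import java.util.Collections;
-    import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
-    import org.apache.accumulo.core.util.Pair;
-    import org.apache.hadoop.io.Text;
-    import org.apache.hadoop.mapreduce.Job;
-
-    Job job = Job.getInstance();
-    job.setInputFormatClass(AccumuloInputFormat.class);
-    AccumuloInputFormat.setInputTableName(job, "input");
-    AccumuloInputFormat.fetchColumns(job,
-        Collections.singleton(new Pair<Text,Text>(new Text("cf"), new Text("cq"))));
-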
-The following will extract the rows containing the column "cf:cq":
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
-
-    $ hadoop fs -ls /tmp/output
-    -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
-    drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs
-    drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs/history
-    -rw-r--r--   1 username supergroup       9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
-    -rw-r--r--   1 username supergroup      26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
-    -rw-r--r--   1 username supergroup         50 2013-01-10 14:44 /tmp/output/part-m-00000
-
-We can see the output of our little map-reduce job:
-
-    $ hadoop fs -text /tmp/output/part-m-00000
-    cat cf:cq []	catvalue
-    dog cf:cq []	dogvalue
-    $
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.terasort
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.terasort b/server/monitor/src/main/resources/docs/examples/README.terasort
deleted file mode 100644
index cf5051a..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.terasort
+++ /dev/null
@@ -1,50 +0,0 @@
-Title: Apache Accumulo Terasort Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example uses map/reduce to generate random input data that will
-be sorted by storing it into accumulo.  It uses data very similar to the
-hadoop terasort benchmark.
-
-To run this example, supply arguments describing the amount of data:
-
-    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
-    -i instance -z zookeepers -u user -p password \
-    --count 10 \
-    --minKeySize 10 \
-    --maxKeySize 10 \
-    --minValueSize 78 \
-    --maxValueSize 78 \
-    --table sort \
-    --splits 10
-
-After the map reduce job completes, scan the data:
-
-    $ ./bin/accumulo shell -u username -p password
-    username@instance> scan -t sort 
-    +l-$$OE/ZH c:         4 []    GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
-    ,C)wDw//u= c:        10 []    CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
-    75@~?'WdUF c:         1 []    IIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQ
-    ;L+!2rT~hd c:         8 []    MMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUU
-    LsS8)|.ZLD c:         5 []    OOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWW
-    M^*dDE;6^< c:         9 []    UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCC
-    ^Eu)<n#kdP c:         3 []    YYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGG
-    le5awB.$sm c:         6 []    WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
-    q__[fwhKFg c:         7 []    EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
-    w[o||:N&H, c:         2 []    QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
-
-Of course, a real benchmark would ingest millions of entries.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.visibility
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.visibility b/server/monitor/src/main/resources/docs/examples/README.visibility
deleted file mode 100644
index ba0b44d..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.visibility
+++ /dev/null
@@ -1,131 +0,0 @@
-Title: Apache Accumulo Visibility, Authorizations, and Permissions Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-## Creating a new user
-
-    root@instance> createuser username
-    Enter new password for 'username': ********
-    Please confirm new password for 'username': ********
-    root@instance> user username
-    Enter password for user username: ********
-    username@instance> createtable vistest
-    06 10:48:47,931 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
-    username@instance> userpermissions
-    System permissions: 
-    
-    Table permissions (accumulo.metadata): Table.READ
-    username@instance> 
-
-A user does not by default have permission to create a table.
-
-## Granting permissions to a user
-
-    username@instance> user root
-    Enter password for user root: ********
-    root@instance> grant -s System.CREATE_TABLE -u username
-    root@instance> user username 
-    Enter password for user username: ********
-    username@instance> createtable vistest
-    username@instance> userpermissions
-    System permissions: System.CREATE_TABLE
-    
-    Table permissions (accumulo.metadata): Table.READ
-    Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
-    username@instance vistest> 
-
-## Inserting data with visibilities
-
-Visibilities are boolean AND (&) and OR (|) combinations of authorization
-tokens.  Authorization tokens are arbitrary strings taken from a restricted 
-ASCII character set.  Parentheses are required to specify order of operations 
-in visibilities.
-
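-In the Java API, the visibility is attached to each entry as it is written.  A
-brief sketch mirroring the last shell insert below:
-
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.security.ColumnVisibility;
-    import org.apache.hadoop.io.Text;
-
-    Mutation m = new Mutation(new Text("row"));
-    m.put(new Text("f3"), new Text("q3"),
-        new ColumnVisibility("(apple&carrot)|broccoli|spinach"),
-        new Value("v3".getBytes()));
-    // add via a BatchWriter on the vistest table
-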
-    username@instance vistest> insert row f1 q1 v1 -l A
-    username@instance vistest> insert row f2 q2 v2 -l A&B
-    username@instance vistest> insert row f3 q3 v3 -l apple&carrot|broccoli|spinach
-    06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and & near index 12
-    apple&carrot|broccoli|spinach
-                ^
-    username@instance vistest> insert row f3 q3 v3 -l (apple&carrot)|broccoli|spinach
-    username@instance vistest> 
-
-## Scanning with authorizations
-
-Authorizations are sets of authorization tokens.  Each Accumulo user has 
-authorizations and each Accumulo scan has authorizations.  Scan authorizations 
-are only allowed to be a subset of the user's authorizations.  By default, a 
-user's authorizations set is empty.
-
-    username@instance vistest> scan
-    username@instance vistest> scan -s A
-    06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
-    username@instance vistest> 
-
-## Setting authorizations for a user
-
-    username@instance vistest> setauths -s A
-    06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
-    username@instance vistest> 
-
-A user cannot set authorizations unless the user has the System.ALTER_USER permission.
-The root user has this permission.
-
-    username@instance vistest> user root
-    Enter password for user root: ********
-    root@instance vistest> setauths -s A -u username
-    root@instance vistest> user username
-    Enter password for user username: ********
-    username@instance vistest> scan -s A
-    row f1:q1 [A]    v1
-    username@instance vistest> scan
-    row f1:q1 [A]    v1
-    username@instance vistest> 
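-
-The same scan can be done from Java by passing an explicit set of
-authorizations. A minimal sketch, reusing the connector from the write sketch
-above:
-
-    import java.util.Map.Entry;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.security.Authorizations;
-
-    // The authorizations passed here must be a subset of the user's authorizations
-    Scanner scanner = conn.createScanner("vistest", new Authorizations("A"));
-    for (Entry<Key,Value> entry : scanner)
-        System.out.println(entry.getKey() + " -> " + entry.getValue());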
-
-The default authorizations for a scan are the user's entire set of authorizations.
-
-    username@instance vistest> user root
-    Enter password for user root: ********
-    root@instance vistest> setauths -s A,B,broccoli -u username
-    root@instance vistest> user username
-    Enter password for user username: ********
-    username@instance vistest> scan
-    row f1:q1 [A]    v1
-    row f2:q2 [A&B]    v2
-    row f3:q3 [(apple&carrot)|broccoli|spinach]    v3
-    username@instance vistest> scan -s B
-    username@instance vistest> 
-    
-You can limit a user to inserting only data that they themselves can read by
-setting the following constraint.
-
-    username@instance vistest> user root
-    Enter password for user root: ******
-    root@instance vistest> config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint    
-    root@instance vistest> user username
-    Enter password for user username: ********
-    username@instance vistest> insert row f4 q4 v4 -l spinach                                                                
-        Constraint Failures:
-            ConstraintViolationSummary(constrainClass:org.apache.accumulo.core.security.VisibilityConstraint, violationCode:2, violationDescription:User does not have authorization on column visibility, numberOfViolatingMutations:1)
-    username@instance vistest> insert row f4 q4 v4 -l spinach|broccoli
-    username@instance vistest> scan
-    row f1:q1 [A]    v1
-    row f2:q2 [A&B]    v2
-    row f3:q3 [(apple&carrot)|broccoli|spinach]    v3
-    row f4:q4 [spinach|broccoli]    v4
-    username@instance vistest> 
-    
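-
-The same constraint can also be set through the Java API; a minimal sketch,
-reusing the connector from the earlier sketches:
-
-    conn.tableOperations().setProperty("vistest", "table.constraint.1",
-        "org.apache.accumulo.core.security.VisibilityConstraint");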

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/index.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/index.html b/server/monitor/src/main/resources/docs/index.html
deleted file mode 100644
index fa399fb..0000000
--- a/server/monitor/src/main/resources/docs/index.html
+++ /dev/null
@@ -1,41 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Documentation</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation</h1>
-<ul>
-<li><a href=accumulo_user_manual.pdf>User Manual</a></li>
-<li><a href=administration.html>Administration</a></li>
-<li><a href=combiners.html>Combiners</a></li>
-<li><a href=constraints.html>Constraints</a></li>
-<li><a href=bulkIngest.html>Bulk Ingest</a></li>
-<li><a href=config.html>Configuration</a></li>
-<li><a href=isolation.html>Isolation</a></li>
-<li><a href=apidocs/index.html>Java API</a></li>
-<li><a href=lgroups.html>Locality Groups</a></li>
-<li><a href=timestamps.html>Timestamps</a></li>
-<li><a href=metrics.html>Metrics</a></li>
-<li><a href=distributedTracing.html>Distributed Tracing</a></li>
-</ul>
-
-</body>
-</html>


http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/metrics.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/metrics.html b/docs/src/main/resources/metrics.html
new file mode 100644
index 0000000..00f0a5b
--- /dev/null
+++ b/docs/src/main/resources/metrics.html
@@ -0,0 +1,182 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Metrics</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Metrics</h1>
+
+As of version 1.2, metrics for the Master, Tablet Servers, and Loggers are available. A new configuration file, accumulo-metrics.xml, is located in the conf directory and can
+be modified to turn metrics collection on or off, and to enable file logging if desired. This file can be modified at runtime and the changes will be seen after a few seconds.
+Except where specified, all time values are in milliseconds.
+<h1>Master Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=MasterMetricsMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getPingCount();</td><td>Number of pings to tablet servers</td></tr>
+		<tr><td>public long getPingAvgTime();</td><td>Average time for each ping</td></tr>
+		<tr class="highlight"><td>public long getPingMinTime();</td><td>Minimum time for each ping</td></tr>
+		<tr><td>public long getPingMaxTime();</td><td>Maximum time for each ping</td></tr>
+		<tr class="highlight"><td>public String getTServerWithHighestPingTime();</td><td>Tablet server with the highest ping time</td></tr>
+		<tr><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
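+<p>These attributes can be read programmatically over JMX. The following is a
+minimal sketch, not part of Accumulo itself; it assumes the master was started
+with remote JMX enabled, and the host and port are placeholders:</p>
+<pre>
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class PingMetrics {
+  public static void main(String[] args) throws Exception {
+    JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://masterhost:9999/jmxrmi");
+    JMXConnector jmxc = JMXConnectorFactory.connect(url);
+    try {
+      MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
+      // Match any thread name in the MBean object name
+      ObjectName pattern = new ObjectName("org.apache.accumulo.server.metrics:type=MasterMetricsMBean,name=*");
+      for (ObjectName name : mbsc.queryNames(pattern, null)) {
+        // JMX attribute names drop the "get" prefix of the MBean methods
+        System.out.println(name + " PingCount=" + mbsc.getAttribute(name, "PingCount"));
+      }
+    } finally {
+      jmxc.close();
+    }
+  }
+}
+</pre>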
+<h1>Logging Server Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=LogWriterMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getCloseCount();</td><td>Number of closed log files</td></tr>
+		<tr><td>public long getCloseAvgTime();</td><td>Average time to close a log file</td></tr>
+		<tr class="highlight"><td>public long getCloseMinTime();</td><td>Minimum time to close a log file</td></tr>
+		<tr><td>public long getCloseMaxTime();</td><td>Maximum time to close a log file</td></tr>
+		<tr class="highlight"><td>public long getCopyCount();</td><td>Number of log files copied</td></tr>
+		<tr><td>public long getCopyAvgTime();</td><td>Average time to copy a log file</td></tr>
+		<tr class="highlight"><td>public long getCopyMinTime();</td><td>Minimum time to copy a log file</td></tr>
+		<tr><td>public long getCopyMaxTime();</td><td>Maximum time to copy a log file</td></tr>
+		<tr class="highlight"><td>public long getCreateCount();</td><td>Number of log files created</td></tr>
+		<tr><td>public long getCreateMinTime();</td><td>Minimum time to create a log file</td></tr>
+		<tr class="highlight"><td>public long getCreateMaxTime();</td><td>Maximum time to create a log file</td></tr>
+		<tr><td>public long getCreateAvgTime();</td><td>Average time to create a log file</td></tr>
+		<tr class="highlight"><td>public long getLogAppendCount();</td><td>Number of times logs have been appended</td></tr>
+		<tr><td>public long getLogAppendMinTime();</td><td>Minimum time to append to a log file</td></tr>
+		<tr class="highlight"><td>public long getLogAppendMaxTime();</td><td>Maximum time to append to a log file</td></tr>
+		<tr><td>public long getLogAppendAvgTime();</td><td>Average time to append to a log file</td></tr>
+		<tr class="highlight"><td>public long getLogFlushCount();</td><td>Number of log file flushes</td></tr>
+		<tr><td>public long getLogFlushMinTime();</td><td>Minimum time to flush a log file</td></tr>
+		<tr class="highlight"><td>public long getLogFlushMaxTime();</td><td>Maximum time to flush a log file</td></tr>
+		<tr><td>public long getLogFlushAvgTime();</td><td>Average time to flush a log file</td></tr>
+		<tr class="highlight"><td>public long getLogExceptionCount();</td><td>Number of log exceptions</td></tr>
+		<tr><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
+<h1>Tablet Server Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public int getOnlineCount();</td><td>Number of tablets online</td></tr>
+		<tr><td>public int getOpeningCount();</td><td>Number of tablets that are being opened</td></tr>
+		<tr class="highlight"><td>public int getUnopenedCount();</td><td>Number of unopened tablets</td></tr>
+		<tr><td>public int getMajorCompactions();</td><td>Number of Major Compactions currently running</td></tr>
+		<tr class="highlight"><td>public int getMajorCompactionsQueued();</td><td>Number of Major Compactions yet to run</td></tr>
+		<tr><td>public int getMinorCompactions();</td><td>Number of Minor Compactions currently running</td></tr>
+		<tr class="highlight"><td>public int getMinorCompactionsQueued();</td><td>Number of Minor Compactions yet to run</td></tr>
+		<tr><td>public int getShutdownStage();</td><td>Current stage in the shutdown process</td></tr>
+		<tr class="highlight"><td>public long getEntries();</td><td>Number of entries in all the tablets</td></tr>
+		<tr><td>public long getEntriesInMemory();</td><td>Number of entries in memory on all tablet servers</td></tr>
+		<tr class="highlight"><td>public long getQueries();</td><td>Number of queries currently running on all the tablet servers</td></tr>
+		<tr><td>public long getIngest();</td><td>Number of entries currently being ingested on all the tablet servers</td></tr>
+		<tr class="highlight"><td>public long getTotalMinorCompactions();</td><td>Number of Minor Compactions completed</td></tr>
+		<tr><td>public double getHoldTime();</td><td>Number of seconds that ingest is waiting for memory to be freed on tablet servers</td></tr>
+		<tr class="highlight"><td>public String getName();</td><td>Address of the master</td></tr>
+	</tbody>
+</table>
+<h1>Tablet Server Minor Compaction Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerMinCMetricsMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getMinorCompactionCount();</td><td>Number of completed Minor Compactions on all tablet servers</td></tr>
+		<tr><td>public long getMinorCompactionAvgTime();</td><td>Average time to complete Minor Compaction</td></tr>
+		<tr class="highlight"><td>public long getMinorCompactionMinTime();</td><td>Minimum time to complete Minor Compaction</td></tr>
+		<tr><td>public long getMinorCompactionMaxTime();</td><td>Maximum time to complete Minor Compaction</td></tr>
+		<tr class="highlight"><td>public long getMinorCompactionQueueCount();</td><td>Number of Minor Compactions yet to be run</td></tr>
+		<tr><td>public long getMinorCompactionQueueAvgTime();</td><td>Average time Minor Compaction is in the queue</td></tr>
+		<tr class="highlight"><td>public long getMinorCompactionQueueMinTime();</td><td>Minimum time Minor Compaction is in the queue</td></tr>
+		<tr><td>public long getMinorCompactionQueueMaxTime();</td><td>Maximum time Minor Compaction is in the queue</td></tr>
+		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
+<h1>Tablet Server Scan Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerScanMetricsMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getScanCount();</td><td>Number of scans completed</td></tr>
+		<tr><td>public long getScanAvgTime();</td><td>Average time for scan operation</td></tr>
+		<tr class="highlight"><td>public long getScanMinTime();</td><td>Minimum time for scan operation</td></tr>
+		<tr><td>public long getScanMaxTime();</td><td>Maximum time for scan operation</td></tr>
+		<tr class="highlight"><td>public long getResultCount();</td><td>Number of scans that returned a result</td></tr>
+		<tr><td>public long getResultAvgSize();</td><td>Average size of scan result</td></tr>
+		<tr class="highlight"><td>public long getResultMinSize();</td><td>Minimum size of scan result</td></tr>
+		<tr><td>public long getResultMaxSize();</td><td>Maximum size of scan result</td></tr>
+		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
+<h1>Tablet Server Update Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=TabletServerUpdateMetricsMBean,name= &lt;current thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getPermissionErrorCount();</td><td>Number of permission errors</td></tr>
+		<tr><td>public long getUnknownTabletErrorCount();</td><td>Number of unknown tablet errors</td></tr>
+		<tr class="highlight"><td>public long getMutationArrayAvgSize();</td><td>Average size of mutation array</td></tr>
+		<tr><td>public long getMutationArrayMinSize();</td><td>Minimum size of mutation array</td></tr>
+		<tr class="highlight"><td>public long getMutationArrayMaxSize();</td><td>Maximum size of mutation array</td></tr>
+		<tr><td>public long getCommitPrepCount();</td><td>Number of commit preparations</td></tr>
+		<tr class="highlight"><td>public long getCommitPrepMinTime();</td><td>Minimum time for commit preparation</td></tr>
+		<tr><td>public long getCommitPrepMaxTime();</td><td>Maximum time for commit preparation</td></tr>
+		<tr class="highlight"><td>public long getCommitPrepAvgTime();</td><td>Average time for commit preparation</td></tr>
+		<tr><td>public long getConstraintViolationCount();</td><td>Number of constraint violations</td></tr>
+		<tr class="highlight"><td>public long getWALogWriteCount();</td><td>Number of writes to the Write Ahead Log</td></tr>
+		<tr><td>public long getWALogWriteMinTime();</td><td>Minimum time of a write to the Write Ahead Log</td></tr>
+		<tr class="highlight"><td>public long getWALogWriteMaxTime();</td><td>Maximum time of a write to the Write Ahead Log</td></tr>
+		<tr><td>public long getWALogWriteAvgTime();</td><td>Average time of a write to the Write Ahead Log</td></tr>
+		<tr class="highlight"><td>public long getCommitCount();</td><td>Number of commits</td></tr>
+		<tr><td>public long getCommitMinTime();</td><td>Minimum time for a commit</td></tr>
+		<tr class="highlight"><td>public long getCommitMaxTime();</td><td>Maximum time for a commit</td></tr>
+		<tr><td>public long getCommitAvgTime();</td><td>Average time for a commit</td></tr>
+		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
+<h1>Thrift Server Metrics</h1>
+<p>JMX Object Name: org.apache.accumulo.server.metrics:type=ThriftMetricsMBean,name= &lt;thread name&gt;</p>
+<table>
+	<thead>
+		<tr><td>Method Name</td><td>Description</td></tr>
+	</thead>
+	<tbody>
+		<tr class="highlight"><td>public long getIdleCount();</td><td>Number of times the Thrift server has been idle</td></tr>
+		<tr><td>public long getIdleMinTime();</td><td>Minimum amount of time the Thrift server has been idle</td></tr>
+		<tr class="highlight"><td>public long getIdleMaxTime();</td><td>Maximum amount of time the Thrift server has been idle</td></tr>
+		<tr><td>public long getIdleAvgTime();</td><td>Average time the Thrift server has been idle</td></tr>
+		<tr class="highlight"><td>public long getExecutionCount();</td><td>Number of calls processed by the Thrift server</td></tr>
+		<tr><td>public long getExecutionMinTime();</td><td>Minimum amount of time executing method</td></tr>
+		<tr class="highlight"><td>public long getExecutionMaxTime();</td><td>Maximum amount of time executing method</td></tr>
+		<tr><td>public long getExecutionAvgTime();</td><td>Average time executing methods</td></tr>
+		<tr class="highlight"><td>public void reset();</td><td>Resets all counters to zero</td></tr>
+	</tbody>
+</table>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/timestamps.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/timestamps.html b/docs/src/main/resources/timestamps.html
new file mode 100644
index 0000000..9c240d2
--- /dev/null
+++ b/docs/src/main/resources/timestamps.html
@@ -0,0 +1,160 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Timestamps</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Timestamps</h1>
+
+<p>Everything inserted into accumulo has a timestamp. If the user does not
+set one, the system will set it. The timestamp is the last
+thing accumulo sorts on, so when two keys have the same row, column family,
+column qualifier, and column visibility, the timestamps of the two keys are
+compared.
+
+<p>Timestamps are sorted in descending order, so the most recent data comes
+first. When a table is created in accumulo, by default it has a versioning
+iterator that only shows the most recent version of a key. In the example below
+the same key is inserted twice, and the scan that follows shows only the most
+recent version. However, when the versioning iterator is configured to keep
+three versions, both are seen. When data is inserted with a lower timestamp
+than existing data, it falls behind the existing data and may not be seen,
+depending on the versioning settings. This is why the insert made with a
+timestamp of 500 is not seen in the scan below.
+
+<p><pre>
+root@ac12&gt; createtable foo
+root@ac12 foo&gt;
+root@ac12 foo&gt;
+root@ac12 foo&gt; insert r1 cf1 cq1 value1
+root@ac12 foo&gt; insert r1 cf1 cq1 value2
+root@ac12 foo&gt; scan -st
+r1 cf1:cq1 [] 1279906856203    value2
+root@ac12 foo&gt; config -t foo -f iterator
+---------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
+SCOPE    | NAME                                        | VALUE
+---------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
+table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+---------+---------------------------------------------+-----------------------------------------------------------------------------------------------------
+root@ac12 foo&gt; config -t foo -s table.iterator.scan.vers.opt.maxVersions=3
+root@ac12 foo&gt; config -t foo -s table.iterator.minc.vers.opt.maxVersions=3
+root@ac12 foo&gt; config -t foo -s table.iterator.majc.vers.opt.maxVersions=3
+root@ac12 foo&gt; scan -st
+r1 cf1:cq1 [] 1279906856203    value2
+r1 cf1:cq1 [] 1279906853170    value1
+root@ac12 foo&gt; insert -t 600 r1 cf1 cq1 value3
+root@ac12 foo&gt; insert -t 500 r1 cf1 cq1 value4
+root@ac12 foo&gt; scan -st
+r1 cf1:cq1 [] 1279906856203    value2
+r1 cf1:cq1 [] 1279906853170    value1
+r1 cf1:cq1 [] 600    value3
+root@ac12 foo&gt;
+
+</pre>
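+
+<p>The inserts with explicit timestamps and the maxVersions change can also be
+done through the Java API. A minimal sketch, assuming an existing
+<code>Connector</code> named <code>conn</code> (the connector setup is not shown
+in the shell session above):
+
+<p><pre>
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.data.Mutation;
+
+// Raise the scan-time version limit; the minc and majc scopes can be set the same way
+conn.tableOperations().setProperty("foo", "table.iterator.scan.vers.opt.maxVersions", "3");
+
+BatchWriter bw = conn.createBatchWriter("foo", new BatchWriterConfig());
+Mutation m = new Mutation("r1");
+m.put("cf1", "cq1", 600L, "value3");  // explicit timestamp of 600
+m.put("cf1", "cq1", 500L, "value4");  // falls behind the insert above
+bw.addMutation(m);
+bw.close();
+</pre>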
+
+<p>Deletes are special keys in accumulo that get sorted along with all the other
+data. When a delete key is inserted, accumulo will not show anything that has
+a timestamp less than or equal to the delete key. In the example below an
+insert is made with timestamp 5 and then a delete is inserted with timestamp 3.
+The scan after that shows that the delete marker does not hide the key. However,
+when a delete is inserted with timestamp 5, nothing can be seen. Once a
+delete marker is inserted, it remains until a full major compaction occurs.
+That is why the insert made after the delete cannot be seen. The insert after
+the flush and compact commands can be seen because the delete marker is gone.
+The flush command forced a minor compaction and the compact command forced a full major compaction.
+
+<p><pre>
+root@ac12&gt; createtable bar
+root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val1
+root@ac12 bar&gt; scan -st
+r1 cf1:cq1 [] 5    val1
+root@ac12 bar&gt; delete -t 3 r1 cf1 cq1
+root@ac12 bar&gt; scan
+r1 cf1:cq1 []    val1
+root@ac12 bar&gt; scan -st
+r1 cf1:cq1 [] 5    val1
+root@ac12 bar&gt; delete -t 5 r1 cf1 cq1
+root@ac12 bar&gt; scan -st
+root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val2
+root@ac12 bar&gt; scan -st
+root@ac12 bar&gt; flush -t bar
+23 14:01:36,587 [shell.Shell] INFO : Flush of table bar initiated...
+root@ac12 bar&gt; compact -t bar
+23 14:02:00,042 [shell.Shell] INFO : Compaction of table bar scheduled for 20100723140200EDT
+root@ac12 bar&gt; insert -t 5 r1 cf1 cq1 val1
+root@ac12 bar&gt; scan
+r1 cf1:cq1 []    val1
+</pre>
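+
+<p>Deletes can also be written from Java with an explicit timestamp. A minimal
+sketch, again assuming a <code>Connector</code> named <code>conn</code>:
+
+<p><pre>
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.data.Mutation;
+
+BatchWriter bw = conn.createBatchWriter("bar", new BatchWriterConfig());
+Mutation m = new Mutation("r1");
+m.putDelete("cf1", "cq1", 5L);  // hides anything at timestamp 5 or lower
+bw.addMutation(m);
+bw.close();
+</pre>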
+
+<p>If two inserts are made into accumulo with the same row, column, and
+timestamp, then the behavior is non-deterministic.
+
+<p>Accumulo 1.2 introduces the concept of logical time. This ensures that
+timestamps set by accumulo always move forward. There have been many problems
+caused by tablet servers with different system times. In the case where a
+tablet server's time is in the future, tablets hosted on that tablet server and
+then migrated will have future timestamps in their data. This can cause newer
+keys to fall behind existing keys, which can result in seeing older data or in
+not seeing data at all if a new key falls behind an old delete. Logical time
+prevents this by ensuring that accumulo-set timestamps never go backwards, on a
+per tablet basis. So if a tablet server's time is a year in the future, then any
+tablet hosted there will generate timestamps a year in the future, even when
+later hosted on a server with the correct time. Logical time can be configured
+on a per table basis to either set time in millis or to use a per tablet
+counter. The per tablet counter gives unique one-up timestamps on a per
+mutation basis. When using time in millis, if two things arrive within the
+same millisecond then both receive the same timestamp.
+
+<p>The example below shows a table created using a per tablet counter for
+timestamps. Two inserts are made; the first gets timestamp 0, the second 1.
+After that the table is split into two tablets and two more inserts are made.
+These inserts get the same timestamp because they are made on different
+tablets. When the original tablet is split into two, the two child tablets
+inherit the next timestamp of their parent and start from there. So do not
+expect this configuration to offer unique timestamps across a table. Its only
+purpose is to uniquely order events within a tablet.
+
+<p><pre>
+root@ac12 foo&gt; createtable -tl logical
+root@ac12 logical&gt; insert 000892 person name "John Doe"
+root@ac12 logical&gt; insert 003042 person name "Jane Doe"
+root@ac12 logical&gt; scan -st
+000892 person:name [] 0    John Doe
+003042 person:name [] 1    Jane Doe
+root@ac12 logical&gt;
+root@ac12 logical&gt; addsplits -t logical 002000
+root@ac12 logical&gt; insert 003042 person address "123 Somewhere"
+root@ac12 logical&gt; insert 000892 person address "123 Nowhere"
+root@ac12 logical&gt; scan -st
+000892 person:address [] 2    123 Nowhere
+000892 person:name [] 0    John Doe
+003042 person:address [] 2    123 Somewhere
+003042 person:name [] 1    Jane Doe
+root@ac12 logical&gt;
+
+</pre>
+
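+<p>A table using logical time can also be created through the Java API. A
+minimal sketch, assuming an existing <code>Connector</code> named
+<code>conn</code>; the table name is illustrative:
+
+<p><pre>
+import org.apache.accumulo.core.client.admin.TimeType;
+
+conn.tableOperations().create("logical", true, TimeType.LOGICAL);
+</pre>
+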
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index f06380e..c1e4b5e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -654,8 +654,8 @@
           <configuration>
             <arguments>-P apache-release,thrift,assemble,docs,rpm,deb</arguments>
             <autoVersionSubmodules>true</autoVersionSubmodules>
-            <goals>clean compile javadoc:aggregate deploy</goals>
-            <preparationGoals>clean compile javadoc:aggregate verify</preparationGoals>
+            <goals>clean deploy</goals>
+            <preparationGoals>clean verify</preparationGoals>
             <tagNameFormat>@{project.version}</tagNameFormat>
             <releaseProfiles>seal-jars</releaseProfiles>
             <useReleaseProfile>false</useReleaseProfile>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
index bf65dae..22728e2 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
@@ -140,8 +140,7 @@ abstract public class BasicServlet extends HttpServlet {
     // BEGIN HEADER
     sb.append("<head>\n");
     sb.append("<title>").append(getTitle(req)).append(" - Accumulo ").append(Constants.VERSION).append("</title>\n");
-    if ((refresh > 0) && (req.getRequestURI().startsWith("/docs") == false) && (req.getRequestURI().startsWith("/vis") == false)
-        && (req.getRequestURI().startsWith("/shell") == false))
+    if ((refresh > 0) && (req.getRequestURI().startsWith("/vis") == false) && (req.getRequestURI().startsWith("/shell") == false))
       sb.append("<meta http-equiv='refresh' content='" + refresh + "' />\n");
     sb.append("<meta http-equiv='Content-Type' content='").append(DEFAULT_CONTENT_TYPE).append("' />\n");
     sb.append("<meta http-equiv='Content-Script-Type' content='text/javascript' />\n");
@@ -184,7 +183,6 @@ abstract public class BasicServlet extends HttpServlet {
     sb.append("<a href='/gc'>Garbage&nbsp;Collector</a><br />\n");
     sb.append("<a href='/tables'>Tables</a><br />\n");
     sb.append("<a href='/trace/summary?minutes=10'>Recent&nbsp;Traces</a><br />\n");
-    sb.append("<a href='/docs'>Documentation</a><br />\n");
     List<DedupedLogEvent> dedupedLogEvents = LogService.getInstance().getEvents();
     int numLogs = dedupedLogEvents.size();
     boolean logsHaveError = false;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
index 31e63ed..d88bd7c 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
@@ -18,7 +18,6 @@ package org.apache.accumulo.monitor.servlets;
 
 import java.io.IOException;
 import java.io.InputStream;
-import java.io.PrintStream;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Calendar;
@@ -32,7 +31,6 @@ import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 import org.apache.accumulo.core.util.Duration;
 import org.apache.accumulo.core.util.NumUtil;
@@ -55,7 +53,7 @@ public class DefaultServlet extends BasicServlet {
 
   @Override
   protected String getTitle(HttpServletRequest req) {
-    return req.getRequestURI().startsWith("/docs") ? "Documentation" : "Accumulo Overview";
+    return "Accumulo Overview";
   }
 
   private void getResource(HttpServletRequest req, HttpServletResponse resp) throws IOException {
@@ -94,18 +92,6 @@ public class DefaultServlet extends BasicServlet {
   public void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
     if (req.getRequestURI().startsWith("/web"))
       getResource(req, resp);
-    else if (req.getRequestURI().equals("/docs") || req.getRequestURI().equals("/docs/apidocs"))
-      super.doGet(req, resp);
-    else if (req.getRequestURI().equals("/docs/config.html"))
-      new DefaultConfiguration() {
-
-        public void generate(HttpServletResponse resp) throws IOException {
-          generateDocumentation(new PrintStream(resp.getOutputStream()));
-
-        }
-      }.generate(resp);
-    else if (req.getRequestURI().startsWith("/docs"))
-      getResource(req, resp);
     else if (req.getRequestURI().startsWith("/monitor"))
       resp.sendRedirect("/master");
     else if (req.getRequestURI().startsWith("/errors"))
@@ -201,10 +187,6 @@ public class DefaultServlet extends BasicServlet {
 
   @Override
   protected void pageBody(HttpServletRequest req, HttpServletResponse resp, StringBuilder sb) throws IOException {
-    if (req.getRequestURI().equals("/docs") || req.getRequestURI().equals("/docs/apidocs")) {
-      sb.append("<object data='").append(req.getRequestURI()).append("/index.html' type='text/html' width='100%' height='100%'></object>");
-      return;
-    }
 
     sb.append("<table class='noborder'>\n");
     sb.append("<tr>\n");
@@ -266,12 +248,12 @@ public class DefaultServlet extends BasicServlet {
     } else {
       long totalAcuBytesUsed = 0l;
       long totalHdfsBytesUsed = 0l;
-      
+
       try {
         for (String baseDir : VolumeConfiguration.getVolumeUris(ServerConfiguration.getSiteConfiguration())) {
           final Path basePath = new Path(baseDir);
           final FileSystem fs = vm.getVolumeByPath(basePath).getFileSystem();
-          
+
           try {
             // Calculate the amount of space used by Accumulo on the FileSystem
             ContentSummary accumuloSummary = fs.getContentSummary(basePath);
@@ -306,7 +288,7 @@ public class DefaultServlet extends BasicServlet {
         if (totalAcuBytesUsed > 0) {
           // Convert Accumulo usage to a readable String
           diskUsed = bytes(totalAcuBytesUsed);
-          
+
           if (totalHdfsBytesUsed > 0) {
             // Compute amount of space used by Accumulo as a percentage of total space usage.
             consumed = String.format("%.2f%%", totalAcuBytesUsed * 100. / totalHdfsBytesUsed);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/administration.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/administration.html b/server/monitor/src/main/resources/docs/administration.html
deleted file mode 100644
index 51b1c31..0000000
--- a/server/monitor/src/main/resources/docs/administration.html
+++ /dev/null
@@ -1,171 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Administration</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Administration</h1>
-
-<h3>Starting accumulo for the first time</h3>
-
-<p>For the most part, accumulo is ready to go out of the box. To start it, first you must distribute and install
-the accumulo software to each machine in the cloud that you wish to run on. The software should be installed
-in the same directory on each machine and configured identically (or at least similarly... see the configuration
-sections for more details). Select one machine to be your bootstrap machine, the one that you will start accumulo
-with. Note that you must have passphrase-less ssh access to each machine from your bootstrap machine. On this machine,
-create a conf/masters and conf/slaves file. In the masters file, type the hostname of the machine you wish to run the master on (probably localhost).
-In the slaves file, type the hostnames, separated by newlines, of each machine you wish to participate in accumulo as a tablet server. If you neglect
-to create these files, the startup scripts will assume you are trying to run on localhost only, and will instantiate a single-node instance.
-It is probably a good idea to back up these files, or distribute them to the other nodes as well, so that you can easily boot up accumulo
-from another machine, if necessary. You can also create a <code>conf/accumulo-env.sh</code> file if you want to configure any custom environment variables.
-
-<p>Once properly configured, you can initialize or prepare an instance of accumulo by running: <code>bin/accumulo&nbsp;init</code><br />
-Follow the prompts and you are ready to go. This step only prepares accumulo to run; it does not start up accumulo.
-
-<h3>Starting accumulo</h3>
-
-<p>Once you have configured accumulo to your liking, and distributed the appropriate configuration to each machine, you can start accumulo with
-bin/start-all.sh. If at any time you wish to bring accumulo servers online after one or more have been shut down, you can run bin/start-all.sh again.
-This step will only start services that are not already running. Be aware that if you run this command on more than one machine, you may unintentionally
-start an extra copy of the garbage collector service and the monitoring service, since each of these will run on the server on which you run this script.
-
-<h3>Stopping accumulo</h3>
-
-<p>Similar to the start-all.sh script, we provide a bin/stop-all.sh script to shut down accumulo. This will prompt for the root password so that it can
-ask the master to shut down the tablet servers gracefully. If the tablet servers do not respond, or the master takes too long, you can force a shutdown by hitting Ctrl-C
-at the password prompt and waiting 15 seconds for the script to proceed. Normally, once the shutdown happens gracefully, unresponsive tablet servers are
-forcibly shut down after 5 seconds.
-
-<h3>Adding a Node</h3>
-
-<p>Update your <code>$ACCUMULO_HOME/conf/slaves</code> (or <code>$ACCUMULO_CONF_DIR/slaves</code>) file to account for the addition; at a minimum this needs to be on the host(s) being added, but in practice it's good to ensure consistent configuration across all nodes.</p>
-
-<pre>
-$ACCUMULO_HOME/bin/accumulo admin start &lt;host&gt; {&lt;host&gt; ...}
-</pre>
-
-<p>Alternatively, you can ssh to each of the hosts you want to add and run <code>$ACCUMULO_HOME/bin/start-here.sh</code>.</p>
-
-<p>Make sure the host in question has the new configuration, or else the tablet server won't start.</p>
-
-<h3>Decommissioning a Node</h3>
-
-<p>If you need to take a node out of operation, you can trigger a graceful shutdown of a tablet server. Accumulo will automatically rebalance the tablets across the available tablet servers.</p>
-
-<pre>
-$ACCUMULO_HOME/bin/accumulo admin stop &lt;host&gt; {&lt;host&gt; ...}
-</pre>
-
-<p>Alternatively, you can ssh to each of the hosts you want to remove and run <code>$ACCUMULO_HOME/bin/stop-here.sh</code>.</p>
-
-<p>Be sure to update your <code>$ACCUMULO_HOME/conf/slaves</code> (or <code>$ACCUMULO_CONF_DIR/slaves</code>) file to account for the removal of these hosts. Bear in mind that the monitor will not re-read the slaves file automatically, so it will report the decommissioned servers as down; it's recommended that you restart the monitor so that the node list is up to date.</p>
-
-<h3>Configuration</h3>
-<p>Accumulo configuration information is stored in an XML file and in ZooKeeper.  System-wide
-configuration information is stored in accumulo-site.xml. In order for accumulo to
-find this file, its directory must be on the classpath.  Accumulo will log a warning if it cannot find
-it, and will use built-in default values. The accumulo scripts try to put the config directory on the classpath.
-
-<p>Starting with version 1.0, per-table configuration was
-introduced. This information is stored in ZooKeeper. This information
-can be manipulated using the config command in the accumulo
-shell. ZooKeeper will notify all tablet servers when config properties
-are modified. This makes it possible to change major compaction
-settings, for example, for a table while accumulo is running.
-
-<p>Per-table configuration settings override system settings. 
-
-<p>See the possible configuration options and their default values <a href='config.html'>here</a>
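-
-<p>Per-table properties can also be read and changed through the Java API while
-accumulo is running. A minimal sketch, assuming an existing <code>Connector</code>
-named <code>conn</code>; the table name is illustrative:
-
-<pre>
-// Change a major compaction setting for one table at runtime
-conn.tableOperations().setProperty("mytable", "table.compaction.major.ratio", "2");
-
-// List the effective properties for the table
-for (java.util.Map.Entry&lt;String,String&gt; entry : conn.tableOperations().getProperties("mytable"))
-  System.out.println(entry.getKey() + " = " + entry.getValue());
-</pre>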
-
-<h3>Managing system resources</h3>
-
-<p>It is very important how disk and memory usage are allocated across the cluster and how server processes are distributed across the machines.
-
-<ul>
- <li> On larger clusters, run the namenode, secondary namenode, jobtracker, accumulo master, and zookeepers on dedicated nodes.  On a smaller cluster you may want to run all master processes on one node.  When doing this ensure that the max total memory that could be used by all master processes does not exceed system memory.  Swapping on your single master node would not be good.
- <li> Accumulo 1.2 and earlier rely on zookeeper but do not use it heavily.  On a large cluster setting up 3 or 5 zookeepers should be plenty.  Since there is no performance gain when running more zookeepers, fault tolerance is the only benefit.
- <li> On slave nodes ensure the memory used by all slave processes is less than system memory.  For example the following slave node config could use up to 38G of RAM : tablet server 3G, logger 1G, data node 2G, up to 10 mappers each using 2G, and up to 6 reducers each using 2G.  If the slave nodes only have 32G, then using 38G will result in swapping, which could cause tablet servers to lose their locks in zookeeper and die.  Even if swapping does not cause tablet servers to die, it will kill performance.
- <li>Accumulo and map reduce will work with less memory, but it has an impact.  Accumulo will minor compact more frequently when it has less memory for its in-memory map, resulting in more major compactions.  The minor and major compactions both use CPU and HDFS I/O.  The same goes for map reduce: the less memory you give it, the more it has to sort and spill.  Try to minimize spilling and compactions as much as possible without causing swapping.
- <li>Accumulo writes data to disk before it sorts it in memory.  This allows data that was in memory when a tablet server crashes to be recovered.  Each slave node needs a local directory to write this data to.  Ensure the file system holding this directory has at least 100G free on all nodes.  Also, if this directory is in a filesystem used by map reduce or hdfs, they may affect each other's performance.
-</ul>
-
-<p>There are a few settings that determine how much memory accumulo tablet
-servers use.  In accumulo-env.sh there is a setting called
-ACCUMULO_TSERVER_OPTS.  By default this is set to something like "-Xmx512m
--Xms512m".  These are Java jvm options asking Java to use 512 megabytes of
-memory.  By default accumulo stores data written to it outside of the Java
-memory space in order to avoid pauses caused by the Java garbage collector.  The
-amount of memory it uses for this data is determined by the accumulo setting
-"tserver.memory.maps.max".  Since this memory is outside of the Java managed
-memory, the process can grow larger than the -Xmx setting.  So if -Xmx is set
-to 512M and tserver.memory.maps.max is set to 1G, a tablet server process can
-be expected to use 1.5G.  If tserver.memory.maps.native.enabled is set to
-false, then accumulo will only use memory managed by Java and the process will
-not use more than what -Xmx is set to.  In this case the
-tserver.memory.maps.max setting should be 75% of the -Xmx setting. 
-
-<h3>Swappiness</h3>
-
-<p>The linux kernel will swap out memory of running programs to increase
-the size of the disk buffers.  This tendency to swap out is controlled by
-a kernel setting called "swappiness."  This behavior does not work well for
-large java servers.  When a java process runs a garbage collection, it touches
-lots of pages forcing all swapped out pages back into memory.  It is suggested
-that swappiness be set to zero.
-
-<pre>
- # sysctl -w vm.swappiness=0
- # echo "vm.swappiness = 0" &gt;&gt; /etc/sysctl.conf
-</pre>
-
-<h3>Hadoop timeouts</h3>
-
-<p>In order to detect failed datanodes, use shorter timeouts.  Add the following to your
-hdfs-site.xml file:
-
-<pre>
-
-  &lt;property&gt;
-    &lt;name&gt;dfs.socket.timeout&lt;/name&gt;
-    &lt;value&gt;3000&lt;/value&gt;
-  &lt;/property&gt;
-
-  &lt;property&gt;
-    &lt;name&gt;dfs.socket.write.timeout&lt;/name&gt;
-    &lt;value&gt;5000&lt;/value&gt;
-  &lt;/property&gt;
-
-  &lt;property&gt;
-    &lt;name&gt;ipc.client.connect.timeout&lt;/name&gt;
-    &lt;value&gt;1000&lt;/value&gt;
-  &lt;/property&gt;
-
-  &lt;property&gt;
-    &lt;name&gt;ipc.client.connect.max.retries.on.timeouts&lt;/name&gt;
-    &lt;value&gt;2&lt;/value&gt;
-  &lt;/property&gt;
-
-
-
-</pre>
-
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/bulkIngest.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/bulkIngest.html b/server/monitor/src/main/resources/docs/bulkIngest.html
deleted file mode 100644
index 86cdb71..0000000
--- a/server/monitor/src/main/resources/docs/bulkIngest.html
+++ /dev/null
@@ -1,114 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Bulk Ingest</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Bulk Ingest</h1>
-
-<p>Accumulo supports the ability to import sorted files produced by an
-external process into an online table.  Often, it is much faster to churn
-through large amounts of data using map/reduce to produce these files.
-The new files can be incorporated into Accumulo using bulk ingest.
-
-<ul>
-<li>Construct an <code>org.apache.accumulo.core.client.Connector</code> instance</li>
-<li>Call <code>connector.tableOperations().getSplits()</code></li>
-<li>Run a map/reduce job using <a href='apidocs/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.html'>RangePartitioner</a> 
-with splits from the previous step</li>
-<li>Call <code>connector.tableOperations().importDirectory()</code> passing the output directory of the MapReduce job</li>
-</ul> 
-
-<p>Files can also be imported using the "importdirectory" shell command.
-
-<p>A complete example is available in <a href='examples/README.bulkIngest'>README.bulkIngest</a>
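-
-<p>In Java, the final import step looks like the following. This is a minimal
-sketch, using the <code>Connector</code> from the first step; the table name
-and directories are placeholders:
-
-<pre>
-// The failures directory must exist and be empty; files that cannot be
-// imported are moved there.  The final flag asks accumulo to overwrite the
-// timestamps in the files with the ingest time.
-connector.tableOperations().importDirectory("table1", "/tmp/bulk/output",
-    "/tmp/bulk/failures", false);
-</pre>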
-
-<p>Importing data using whole files of sorted data can be very efficient, but it differs
-from live ingest in the following ways:
-<ul>
- <li>Table constraints are not applied against the data in the file.
- <li>Adding new files to a table is likely to trigger major compactions.
- <li>The timestamp in the file could contain strange values.  Accumulo can be asked to use the ingest timestamp for all values if this is a concern.
- <li>It is possible to create invalid visibility values (for example "&|").  This will cause errors when the data is accessed.
- <li>Bulk imports do not affect the entry counts in the monitor page until the files are compacted.
-</ul>
-
-<h2>Best Practices</h2>
-
-<p>Consider two approaches to creating ingest files using map/reduce.
-
-<ol>
- <li>A large file containing the Key/Value pairs for only a single tablet.
- <li>A set of small files containing Key/Value pairs for every tablet.
-</ol>
-
-<p>In the first case, adding the file requires telling a single tablet server about a single file.  Even if the file
-is 20G in size, it is one call to the tablet server.  The tablet server makes one extra file entry in the
-tablet's metadata, and the data is now part of the tablet.
-
-<p>In the second case, a request must be made for each tablet for each file to be added.  If there
-are 100 files and 100 tablets, this will be 10K requests, and the number of files that need to be opened
-for scans on these tablets will be very large.  Major compactions will most likely start, which will eventually
-fix the problem, but a lot more work needs to be done by accumulo to read these files.
-
-<p>Getting good, fast, bulk import performance depends on creating files like the first, and avoiding files like
-the second.
-
-<p>For this reason, a RangePartitioner should be used to create files when
-writing with the AccumuloFileOutputFormat.
-
-<p>Hash partitioning is not recommended because it will put keys in random
-groups, exactly like the second approach above.
-
-<p>Any set of cut points for range partitioning can be used in a map
-reduce job, but using Accumulo's current splits is usually optimal.
-However, in some cases there may be too many
-splits.  For example, if there are 2000 splits, you would need to run
-2001 reducers.  To overcome this problem use the
-<code>connector.tableOperations().getSplits(&lt;table name&gt;,&lt;max
-splits&gt;)</code> method.  This method will not return more than
-<code> &lt;max splits&gt; </code> splits, but the splits it returns
-will optimally partition the data for Accumulo.
-  
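-<p>For example, the cut points can be fetched and written to a split file for
-the partitioner. This is a minimal sketch following the Base64 convention of
-the bulk ingest example; the table name, max splits, and file path are
-placeholders:
-
-<pre>
-import java.io.PrintStream;
-import java.util.Collection;
-import org.apache.accumulo.core.util.TextUtil;
-import org.apache.commons.codec.binary.Base64;
-import org.apache.hadoop.io.Text;
-
-// Fetch at most 100 split points, then write them Base64 encoded, one per line
-Collection&lt;Text&gt; splits = connector.tableOperations().getSplits("table1", 100);
-PrintStream out = new PrintStream("splits.txt");
-for (Text split : splits)
-  out.println(new String(Base64.encodeBase64(TextUtil.getBytes(split))));
-out.close();
-// "splits.txt" is then handed to the RangePartitioner during job setup
-</pre>
-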
-<p>Remember that Accumulo never splits rows across tablets.
-Therefore the range partitioner only considers rows when partitioning.
-
-<p>When bulk importing many files into a new table, it might be good to pre-split the table so that
-additional resources are available to accept the data.  For example, if you know your data is indexed based on the
-date, pre-creating splits for each day will allow files to fall into natural splits.  Having more tablets
-accept the new data means that more resources can be used to import the data right away.
-
-<p>An alternative to bulk ingest is to have a map/reduce job use
-<code>AccumuloOutputFormat</code>, which can support billions of inserts per
-hour, depending on the size of your cluster. This is sufficient for
-most users, but bulk ingest remains the fastest way to incorporate
-data into Accumulo.  In addition, bulk ingest has one advantage over
-AccumuloOutputFormat: there is no duplicate data insertion.  When one uses
-map/reduce to output data to accumulo, restarted jobs may re-enter
-data from previous failed attempts. Generally, this only matters when
-there are aggregators. With bulk ingest, reducers are writing to new
-map files, so it does not matter. If a reduce fails, you create a new
-map file.  When all reducers finish, you bulk ingest the map files
-into Accumulo.  The disadvantage to bulk ingest over <code>AccumuloOutputFormat</code> is 
-greater latency: the entire map/reduce job must complete
-before any data is available.
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/combiners.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/combiners.html b/server/monitor/src/main/resources/docs/combiners.html
deleted file mode 100644
index cf18e05..0000000
--- a/server/monitor/src/main/resources/docs/combiners.html
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Combiners</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Combiners</h1>
-
-<p>Accumulo supports on-the-fly lazy aggregation of data using Combiners.  Aggregation is done at compaction and scan time.  No lookup is done at insert time, which greatly speeds up ingest.
-
-<p>Combiners are easy to use.  You use the setiter command to configure a combiner for a table.  Allowing a Combiner to apply to a whole column family is an interesting twist that gives the user great flexibility.  The example below demonstrates this flexibility.
-
-<p><pre>
-
-Shell - Apache Accumulo Interactive Shell
-- version: 1.5.0
-- instance id: 863fc0d1-3623-4b6c-8c23-7d4fdb1c8a49
-- 
-- type 'help' for a list of available commands
--
-user@instance&gt; createtable perDayCounts
-user@instance perDayCounts&gt; setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount -class org.apache.accumulo.core.iterators.user.SummingCombiner
-TypedValueCombiner can interpret Values as a variety of number encodings (VLong, Long, or String) before combining
-----------&gt; set SummingCombiner parameter columns, &lt;col fam&gt;[:&lt;col qual&gt;]{,&lt;col fam&gt;[:&lt;col qual&gt;]} escape non aplhanum chars using %&lt;hex&gt;.: day
-----------&gt; set SummingCombiner parameter type, &lt;VARNUM|LONG|STRING&gt;: STRING
-user@instance perDayCounts&gt; insert foo day 20080101 1
-user@instance perDayCounts&gt; insert foo day 20080101 1
-user@instance perDayCounts&gt; insert foo day 20080103 1
-user@instance perDayCounts&gt; insert bar day 20080101 1
-user@instance perDayCounts&gt; insert bar day 20080101 1
-user@instance perDayCounts&gt; scan
-bar day:20080101 []    2
-foo day:20080101 []    2
-foo day:20080103 []    1
-</pre>
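-
-<p>The same combiner can also be configured through the Java API. A minimal
-sketch, assuming an existing <code>Connector</code> named <code>conn</code>:
-
-<p><pre>
-import java.util.Collections;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.iterators.LongCombiner;
-import org.apache.accumulo.core.iterators.user.SummingCombiner;
-
-IteratorSetting is = new IteratorSetting(10, "daycount", SummingCombiner.class);
-SummingCombiner.setEncodingType(is, LongCombiner.Type.STRING);
-SummingCombiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("day")));
-// Attaches the iterator at all scopes (scan, minc, majc) by default
-conn.tableOperations().attachIterator("perDayCounts", is);
-</pre>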
-
-
-<p>Implementing a new Combiner is a snap.  Simply write some Java code that extends <a href='apidocs/org/apache/accumulo/core/iterators/Combiner.html'>org.apache.accumulo.core.iterators.Combiner</a>. A good place to look for examples is the <a href='apidocs/org/apache/accumulo/core/iterators/user/package-summary.html'>org.apache.accumulo.core.iterators.user</a> package.  Also look at the example StatsCombiner.     
-
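-
-<p>As a sketch of what such an extension might look like (this class is
-illustrative, not part of Accumulo, and assumes the values are string-encoded
-longs):
-
-<p><pre>
-import java.util.Iterator;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.Combiner;
-
-public class MaxCombiner extends Combiner {
-  @Override
-  public Value reduce(Key key, Iterator&lt;Value&gt; iter) {
-    // Combine all versions of a key by keeping the largest value
-    long max = Long.MIN_VALUE;
-    while (iter.hasNext())
-      max = Math.max(max, Long.parseLong(new String(iter.next().get())));
-    return new Value(Long.toString(max).getBytes());
-  }
-}
-</pre>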
-<p>To deploy a new combiner, jar it up and put the jar in accumulo/lib/ext.  To see an example look at <a href='examples/README.combiner'>README.combiner</a>
-
-<p>If you would like to see what iterators a table has you can use the config command like in the following example.
-
-<p><pre>
-user@instance perDayCounts&gt; config -t perDayCounts -f iterator
----------+---------------------------------------------+-----------------------------------------------------------
-SCOPE    | NAME                                        | VALUE
----------+---------------------------------------------+-----------------------------------------------------------
-table    | table.iterator.majc.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
-table    | table.iterator.majc.daycount.opt.columns .. | day
-table    | table.iterator.majc.daycount.opt.type ..... | STRING
-table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.majc.vers.opt.maxVersions .. | 1
-table    | table.iterator.minc.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
-table    | table.iterator.minc.daycount.opt.columns .. | day
-table    | table.iterator.minc.daycount.opt.type ..... | STRING
-table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.minc.vers.opt.maxVersions .. | 1
-table    | table.iterator.scan.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
-table    | table.iterator.scan.daycount.opt.columns .. | day
-table    | table.iterator.scan.daycount.opt.type ..... | STRING
-table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
-table    | table.iterator.scan.vers.opt.maxVersions .. | 1
----------+---------------------------------------------+-----------------------------------------------------------
-</pre>
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/constraints.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/constraints.html b/server/monitor/src/main/resources/docs/constraints.html
deleted file mode 100644
index b227ed7..0000000
--- a/server/monitor/src/main/resources/docs/constraints.html
+++ /dev/null
@@ -1,49 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Constraints</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Constraints</h1>
-
-Accumulo supports constraints.  Constraints are applied to mutations at ingest time.  
-
-<p>Implementing a new constraint is a snap.  Simply write some Java code that implements <a href='apidocs/org/apache/accumulo/core/constraints/Constraint.html'>org.apache.accumulo.core.constraints.Constraint</a>.     
-
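-<p>For illustration, below is a minimal sketch of a constraint.  The class name and the one megabyte limit are hypothetical; the sketch is only meant to show the shape of the interface.
-
-<p><pre>
-import java.util.Collections;
-import java.util.List;
-
-import org.apache.accumulo.core.constraints.Constraint;
-import org.apache.accumulo.core.data.ColumnUpdate;
-import org.apache.accumulo.core.data.Mutation;
-
-public class ExampleConstraint implements Constraint {
-  private static final short VALUE_TOO_LARGE = 1;
-
-  @Override
-  public String getViolationDescription(short violationCode) {
-    return violationCode == VALUE_TOO_LARGE ? "value exceeds 1MB" : null;
-  }
-
-  // Returns violation codes for a mutation, or null if there are none.
-  @Override
-  public List&lt;Short&gt; check(Environment env, Mutation mutation) {
-    for (ColumnUpdate update : mutation.getUpdates())
-      if (update.getValue().length &gt; 1 &lt;&lt; 20)
-        return Collections.singletonList(VALUE_TOO_LARGE);
-    return null;
-  }
-}
-</pre>
-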
-<p>To deploy a new constraint, jar it up and put the jar in accumulo/lib/ext.
-
-<p>After creating a constraint, set a table-specific property to use it.  The following example adds two constraints to table foo.  In the example, com.test.ExampleConstraint and com.test.AnotherConstraint are class names.
-
-<p><pre>
-user@instance:9999 perDayCounts&gt; createtable foo
-user@instance:9999 foo&gt; config -t foo -s table.constraint.1=com.test.ExampleConstraint
-user@instance:9999 foo&gt; config -t foo -s table.constraint.2=com.test.AnotherConstraint
-user@instance:9999 foo&gt; config -t foo -f constraint
----------+------------------------------------------+-----------------------------------------
-SCOPE    | NAME                                     | VALUE
----------+------------------------------------------+-----------------------------------------
-table    | table.constraint.1...................... | com.test.ExampleConstraint
-table    | table.constraint.2...................... | com.test.AnotherConstraint
----------+------------------------------------------+-----------------------------------------
-user@instance:9999 foo&gt; 
-</pre>
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/distributedTracing.html
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/distributedTracing.html b/server/monitor/src/main/resources/docs/distributedTracing.html
deleted file mode 100644
index 5d363f3..0000000
--- a/server/monitor/src/main/resources/docs/distributedTracing.html
+++ /dev/null
@@ -1,99 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<html>
-<head>
-<title>Accumulo Distributed Tracing</title>
-<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
-</head>
-<body>
-
-<h1>Apache Accumulo Documentation : Distributed Tracing</h1>
-
-<p>It can be difficult to determine why some operations are taking longer than expected.  For example, you may be looking up items with 
-very low latency, but sometimes the lookups take much longer.  Determining the cause of the delay is difficult because the system is 
-distributed, and the typical lookup is fast.</p>
-
-<p>To provide insight into what accumulo is doing during your scan, you can turn on tracing before you do your operation:</p>
-
-<pre>
-   DistributedTrace.enable(instance, zooReader, hostname, "myApplication");
-   Trace scanTrace = Trace.on("client:scan");
-   BatchScanner scanner = conn.createBatchScanner(...);
-   // Configure your scanner
-   for (Entry<Key, Value> entry : scanner) {
-     // consume the results; timing spans are recorded while the trace is on
-   }
-   Trace.off();
-</pre>
-
-
-<p>Accumulo has been instrumented to record the time that various operations take when tracing is turned on.  Once tracing is
-enabled, that state follows all requests made on behalf of the user throughout accumulo's distributed infrastructure and across all
-threads of execution.</p>
-
-<p>These time spans are inserted into accumulo's trace table.  You can browse recent traces from the accumulo monitor page.
-You can also read the trace table directly.</p>
-
-<p>Tracing is supported in the shell.  For example:
-
-<pre>
-root@test&gt; createtable test
-root@test test&gt; insert a b c d
-root@test test&gt; trace on              
-root@test test&gt; scan
-a b:c []    d
-root@test test&gt; trace off
-Waiting for trace information
-Waiting for trace information
-Waiting for trace information
-Trace started at 2011/03/16 09:20:31.387
-Time  Start  Service@Location       Name
- 3355+0      shell@host2 shell:root
-    1+1        shell@host2 client:listUsers
-    1+1434     tserver@host2 getUserAuthorizations
-    1+1434     shell@host2 client:getUserAuthorizations
-   10+1550     shell@host2 scan
-    9+1551       shell@host2 scan:location
-    7+1552         shell@host2 client:startScan
-    6+1553         tserver@host2 startScan
-    5+1553           tserver@host2 tablet read ahead 11
-    1+1559         shell@host2 client:closeScan
-    1+1561     shell@host2 client:listUsers
-</pre>
-
-<p>Here we can see that the shell is getting the list of users (which is used for tab-completion) after every command.  While
-unexpected, it is a fast operation.  In fact, all the requests are very fast, and most of the time is spent waiting for the user
-to make a request while tracing is turned on.</p>
-
-<p>Spans are added to the trace table asynchronously.  The user may have to wait several seconds for all requests to complete before the 
-trace information is complete.</p>
-
-<p>You can extract the trace data out of the trace table.  Each span is stored as a column in a row named for the trace id.
-The following code will print out a trace:</p>
-
-<pre>
-String table = AccumuloConfiguration.getSystemConfiguration().get(Property.TRACE_TABLE);
-Scanner scanner = shellState.connector.createScanner(table, auths);
-scanner.setRange(new Range(new Text(Long.toHexString(scanTrace.traceId()))));
-TraceDump.printTrace(scanner, new Printer() {
-    @Override
-    public void print(String line) {
-        System.out.println(line);
-    }
-});
-</pre>
-
-</body>
-</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/documentation.css
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/documentation.css b/server/monitor/src/main/resources/docs/documentation.css
deleted file mode 100644
index 99ed599..0000000
--- a/server/monitor/src/main/resources/docs/documentation.css
+++ /dev/null
@@ -1,112 +0,0 @@
-/*
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements.  See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License.  You may obtain a copy of the License at
-*
-*     http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-html, body {
-    font-size: 10pt;
-    font-family: verdana, arial;
-}
-
-h1 {
-    font-size: 1.7em;
-    font-variant: small-caps;
-    text-align: left;
-}
-
-h2 {
-    font-size: 1.3em; 
-    text-align: left;
-}
-
-.highlight {
-    background-color: rgb(206,244,181);
-}
-
-.deprecated {
-    text-decoration: line-through;
-}
-
-table {
-    min-width: 60%;
-    border: 1px #333333 solid;
-    border-spacing-top: 0;
-    border-spacing-bottom: 0;
-    border: 1px #333333 solid;
-    border: 1px #333333 solid;
-}
-
-th {
-    border-top: 0;
-    border-bottom: 3px #333333 solid;
-    border-left: 1px #333333 dotted;
-    border-right: 0;
-    border-spacing-top: 0;
-    border-spacing-bottom: 0;
-    text-align: center;
-    font-variant: small-caps;
-    padding-left: 0.1em;
-    padding-right: 0.1em;
-    padding-top: 0.2em;
-    padding-bottom: 0.2em;
-    vertical-align: bottom;
-}
-
-td {
-    border-top: 0;
-    border-bottom: 0;
-    border-left: 0;
-    border-right: 0;
-    border-spacing-top: 0;
-    border-spacing-bottom: 0;
-    padding-left: 0.05em;
-    padding-right: 0.05em;
-    padding-top: 0.15em;
-    padding-bottom: 0.15em;
-}
-
-thead {
-    color: rgb(66,114,185);
-    text-align: center;
-    text-weight: bold;
-}
-
-td {
-    font-size: 10pt;
-    text-align:left;
-    padding-left:7pt;
-    padding-right:7pt;
-}
-
-pre {
-    font-size: 9pt;
-}
-
-a {
-    text-decoration: none;
-    color: #0000ff;
-    line-height: 1.5em;
-}
-
-a:hover {
-    color: #004400;
-    text-decoration: underline;
-}
-
-.large {
-    font-size: 1.5em;
-    font-variant: small-caps;
-    text-align: left;
-}
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README b/server/monitor/src/main/resources/docs/examples/README
deleted file mode 100644
index 0aad866..0000000
--- a/server/monitor/src/main/resources/docs/examples/README
+++ /dev/null
@@ -1,95 +0,0 @@
-Title: Apache Accumulo Examples
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-Before running any of the examples, the following steps must be performed.
-
-1. Install and run Accumulo via the instructions found in $ACCUMULO_HOME/README.
-   Remember the instance name.  It will be referred to as "instance" throughout 
-   the examples. A comma-separated list of zookeeper servers will be referred 
-   to as "zookeepers".
-
-2. Create an Accumulo user (see the [user manual][1]), or use the root user.
-   The "username" Accumulo user name with password "password" is used 
-   throughout the examples. This user needs the ability to create tables.
-
-In all commands, you will need to replace "instance", "zookeepers", 
-"username", and "password" with the values you set for your Accumulo instance.
-
-Commands intended to be run in bash are prefixed by '$'.  These are always 
-assumed to be run from the $ACCUMULO_HOME directory.
-
-Commands intended to be run in the Accumulo shell are prefixed by '>'.
-
-Each README in the examples directory highlights the use of particular 
-features of Apache Accumulo.
-
-   README.batch:       Using the batch writer and batch scanner.
-
-   README.bloom:       Creating a bloom filter enabled table to increase query 
-                       performance.
-
-   README.bulkIngest:  Ingesting bulk data using map/reduce jobs on Hadoop.
-
-   README.classpath:   Using per-table classpaths.
-
-   README.client:      Using table operations, reading and writing data in Java.
-
-   README.combiner:    Using example StatsCombiner to find min, max, sum, and 
-                       count.
-
-   README.constraints: Using constraints with tables.
-
-   README.dirlist:     Storing filesystem information.
-
-   README.export:      Exporting and importing tables.
-
-   README.filedata:    Storing file data.
-
-   README.filter:      Using the AgeOffFilter to remove records more than 30 
-                       seconds old.
-
-   README.helloworld:  Inserting records both inside map/reduce jobs and 
-                       outside. And reading records between two rows.
-
-   README.isolation:   Using the isolated scanner to ensure partial changes 
-                       are not seen.
-
-   README.mapred:      Using MapReduce to read from and write to Accumulo 
-                       tables.
-
-   README.maxmutation: Limiting mutation size to avoid running out of memory.
-
-   README.regex:       Using MapReduce and Accumulo to find data using regular
-                       expressions.
-
-   README.rowhash:     Using MapReduce to read a table and write to a new 
-                       column in the same table.
-
-   README.shard:       Using the intersecting iterator with a term index 
-                       partitioned by document.
-
-   README.tabletofile: Using MapReduce to read a table and write one of its
-                       columns to a file in HDFS.
-
-   README.terasort:    Generating random data and sorting it using Accumulo.  
-
-   README.visibility:  Using visibilities (or combinations of authorizations). 
-                       Also shows user permissions.
-
-
-[1]: /1.5/user_manual/Accumulo_Shell.html#User_Administration

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.batch
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.batch b/server/monitor/src/main/resources/docs/examples/README.batch
deleted file mode 100644
index e78e808..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.batch
+++ /dev/null
@@ -1,55 +0,0 @@
-Title: Apache Accumulo Batch Writing and Scanning Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
-
- * SequentialBatchWriter.java - writes mutations with sequential rows and random values
- * RandomBatchWriter.java - used by SequentialBatchWriter to generate random values
- * RandomBatchScanner.java - reads random rows and verifies their values
-
-This is an example of how to use the batch writer and batch scanner. To compile
-the example, run maven and copy the produced jar into the accumulo lib dir.
-This is already done in the tar distribution. 
-
-Below are commands that add 10000 entries to accumulo and then do 100 random
-queries.  The write command generates random 50-byte values.
-
-Be sure to use the name of your instance (given as instance here) and the appropriate 
-list of zookeeper nodes (given as zookeepers here).
-
-Before you run this, you must ensure that the user you are running as has the
-"exampleVis" authorization. (You can set this in the shell with "setauths -u username -s exampleVis".)
-
-    $ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-
-You must also create the table, batchtest1, ahead of time. (In the shell, use "createtable batchtest1")
-
-    $ ./bin/accumulo shell -u username -e "createtable batchtest1"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
-    07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 lookups/sec   0.14 secs
-    
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : num results : 100
-    
-    07 11:33:11,364 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,370 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
-    
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.bloom
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.bloom b/server/monitor/src/main/resources/docs/examples/README.bloom
deleted file mode 100644
index a7330da..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.bloom
+++ /dev/null
@@ -1,219 +0,0 @@
-Title: Apache Accumulo Bloom Filter Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example shows how to create a table with bloom filters enabled.  It also
-shows how bloom filters increase query performance when looking for values that
-do not exist in a table.
-
-Below, a table named bloom_test is created with bloom filters enabled.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test
-    username@instance bloom_test> config -t bloom_test -s table.bloom.enabled=true
-    username@instance bloom_test> exit
-
-Below 1 million random values are inserted into accumulo.  The randomly
-generated rows range between 0 and 1 billion.  The random number generator is
-initialized with the seed 7.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-
-Below the table is flushed:
-
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
-    05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-
-After the flush completes, 500 random queries are done against the table.  The
-same seed is used to generate the queries, so everything is found in the
-table.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    96.19 lookups/sec   5.20 secs
-    num results : 500
-    Generating 500 random queries...finished
-    102.35 lookups/sec   4.89 secs
-    num results : 500
-
-Below another 500 queries are performed, using a different seed, which results
-in nothing being found.  In this case the lookups are much faster because of
-the bloom filters.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    2212.39 lookups/sec   0.23 secs
-    num results : 0
-    Did not find 500 rows
-    Generating 500 random queries...finished
-    4464.29 lookups/sec   0.11 secs
-    num results : 0
-    Did not find 500 rows
-
-********************************************************************************
-
-Bloom filters can also speed up lookups for entries that exist.  In accumulo
-data is divided into tablets and each tablet has multiple map files. Every
-lookup in accumulo goes to a specific tablet where a lookup is done on each
-map file in the tablet.  So if a tablet has three map files, lookup performance
-can be three times slower than a tablet with one map file.  However if the map
-files contain unique sets of data, then bloom filters can help eliminate map
-files that do not contain the row being looked up.  To illustrate this, two
-tables containing identical data were created using the following process.  One
-table had bloom filters enabled, the other did not.  Also, the major compaction
-ratio was increased to prevent the files from being compacted into one file.
-
- * Insert 1 million entries using  RandomBatchWriter with a seed of 7
- * Flush the table using the shell
- * Insert 1 million entries using  RandomBatchWriter with a seed of 8
- * Flush the table using the shell
- * Insert 1 million entries using  RandomBatchWriter with a seed of 9
- * Flush the table using the shell
-
-After following the above steps, each table will have a tablet with three map
-files.  Flushing the table after each batch of inserts will create a map file.
-Each map file will contain 1 million entries generated with a different seed.
-This is assuming that Accumulo is configured with enough memory to hold 1
-million inserts.  If not, then more map files will be created. 
-
-The commands for creating the first table without bloom filters are below.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test1
-    username@instance bloom_test1> config -t bloom_test1 -s table.compaction.major.ratio=7
-    username@instance bloom_test1> exit
-
-    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test1 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --auths exampleVis"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-
-The commands for creating the second table with bloom filters are below.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    - 
-    - type 'help' for a list of available commands
-    - 
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test2
-    username@instance bloom_test2> config -t bloom_test2 -s table.compaction.major.ratio=7
-    username@instance bloom_test2> config -t bloom_test2 -s table.bloom.enabled=true
-    username@instance bloom_test2> exit
-
-    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test2 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --auths exampleVis"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-
-Below 500 lookups are done against the table without bloom filters using random
-number generator seed 7.  Even though only one map file will likely contain
-entries for this seed, all map files will be interrogated.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    35.09 lookups/sec  14.25 secs
-    num results : 500
-    Generating 500 random queries...finished
-    35.33 lookups/sec  14.15 secs
-    num results : 500
-
-Below the same lookups are done against the table with bloom filters.  The
-lookups were 2.86 times faster because only one map file was used, even though three
-map files existed.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    99.03 lookups/sec   5.05 secs
-    num results : 500
-    Generating 500 random queries...finished
-    101.15 lookups/sec   4.94 secs
-    num results : 500
-
-You can verify the table has three files by looking in HDFS.  To look in HDFS
-you will need the table ID, because this is used in HDFS instead of the table
-name.  The following command will show table IDs.
-
-    $ ./bin/accumulo shell -u username -p password -e 'tables -l'
-    accumulo.metadata    =>        !0
-    accumulo.root        =>        +r
-    bloom_test1          =>        o7
-    bloom_test2          =>        o8
-    trace                =>         1
-
-So the table id for bloom_test2 is o8.  The command below shows what files this
-table has in HDFS.  This assumes Accumulo is at the default location in HDFS. 
-
-    $ hadoop fs -lsr /accumulo/tables/o8
-    drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
-    -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
-    -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
-    -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-
-Running the rfile-info command shows that one of the files has a bloom filter
-and that it is about 1.5MB.
-
-    $ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
-    Locality group         : <DEFAULT>
-	Start block          : 0
-	Num   blocks         : 752
-	Index level 0        : 43,598 bytes  1 blocks
-	First key            : row_0000001169 foo:1 [exampleVis] 1326222052539 false
-	Last key             : row_0999999421 foo:1 [exampleVis] 1326222052058 false
-	Num entries          : 999,536
-	Column families      : [foo]
-
-    Meta block     : BCFile.index
-      Raw size             : 4 bytes
-      Compressed size      : 12 bytes
-      Compression type     : gz
-
-    Meta block     : RFile.index
-      Raw size             : 43,696 bytes
-      Compressed size      : 15,592 bytes
-      Compression type     : gz
-
-    Meta block     : acu_bloom
-      Raw size             : 1,540,292 bytes
-      Compressed size      : 1,433,115 bytes
-      Compression type     : gz
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.bulkIngest
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.bulkIngest b/server/monitor/src/main/resources/docs/examples/README.bulkIngest
deleted file mode 100644
index 0e049b3..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.bulkIngest
+++ /dev/null
@@ -1,33 +0,0 @@
-Title: Apache Accumulo Bulk Ingest Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This is an example of how to bulk ingest data into accumulo using map/reduce.
-
-The following commands show how to run this example.  This example creates a
-table called test_bulk which has two initial split points.  Then 1000 rows of
-test data are created in HDFS.  After that, the 1000 rows are ingested into
-accumulo.  Finally, we verify that the 1000 rows are in accumulo.
-
-    $ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
-    $ ARGS="-i instance -z zookeepers -u username -p password"
-    $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
-    $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
-    $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
-    $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-
-For a high level discussion of bulk ingest, see the docs dir.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/server/monitor/src/main/resources/docs/examples/README.classpath
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/resources/docs/examples/README.classpath b/server/monitor/src/main/resources/docs/examples/README.classpath
deleted file mode 100644
index e816222..0000000
--- a/server/monitor/src/main/resources/docs/examples/README.classpath
+++ /dev/null
@@ -1,68 +0,0 @@
-Title: Apache Accumulo Classpath Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-
-This example shows how to use per-table classpaths.  The example leverages a
-test jar which contains a Filter that suppresses rows containing "foo".  The
-example shows copying the FooFilter.jar into HDFS and then making an Accumulo
-table reference that jar.
-
-
-Execute the following command in bash.
-
-    $ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
-
-Execute the following in the Accumulo shell to set up the classpath context.
-
-    root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib
-
-Create a table
-
-    root@test15> createtable nofoo
-
-The following command makes this table use the configured classpath context
-
-    root@test15 nofoo> config -t nofoo -s table.classpath.context=cx1
-
-The following command configures an iterator that's in FooFilter.jar.
-
-    root@test15 nofoo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    Filter accepts or rejects each Key/Value pair
-    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-
-The commands below show the filter is working.
-
-    root@test15 nofoo> insert foo1 f1 q1 v1
-    root@test15 nofoo> insert noo1 f1 q1 v2
-    root@test15 nofoo> scan
-    noo1 f1:q1 []    v2
-    root@test15 nofoo> 
-
-Below, an attempt is made to add the FooFilter to a table that's not configured
-to use the classpath context cx1.  This fails until the table is configured to
-use cx1.
-
-    root@test15 nofoo> createtable nofootwo
-    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
-    root@test15 nofootwo> config -t nofootwo -s table.classpath.context=cx1
-    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    Filter accepts or rejects each Key/Value pair
-    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-
-


[6/6] git commit: Merge branch '1.6.0-SNAPSHOT'

Posted by ct...@apache.org.
Merge branch '1.6.0-SNAPSHOT'

Conflicts:
	core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/5655a044
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/5655a044
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/5655a044

Branch: refs/heads/master
Commit: 5655a044e027ed44131de943e2419c3a3289ffce
Parents: 0721f8d a20e19f
Author: Christopher Tubbs <ct...@apache.org>
Authored: Thu Mar 27 20:48:02 2014 -0400
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Thu Mar 27 20:48:02 2014 -0400

----------------------------------------------------------------------
 assemble/src/main/assemblies/component.xml      |  17 +-
 .../core/conf/DefaultConfiguration.java         |  13 +-
 docs/pom.xml                                    |  21 ++
 .../chapters/administration.tex                 |   2 +-
 .../chapters/table_configuration.tex            |   4 +-
 docs/src/main/resources/administration.html     | 171 +++++++++++++++
 docs/src/main/resources/bulkIngest.html         | 114 ++++++++++
 docs/src/main/resources/combiners.html          |  87 ++++++++
 docs/src/main/resources/constraints.html        |  50 +++++
 docs/src/main/resources/distributedTracing.html |  99 +++++++++
 docs/src/main/resources/documentation.css       | 112 ++++++++++
 docs/src/main/resources/examples/README         |  95 ++++++++
 docs/src/main/resources/examples/README.batch   |  55 +++++
 docs/src/main/resources/examples/README.bloom   | 219 +++++++++++++++++++
 .../main/resources/examples/README.bulkIngest   |  33 +++
 .../main/resources/examples/README.classpath    |  68 ++++++
 docs/src/main/resources/examples/README.client  |  79 +++++++
 .../src/main/resources/examples/README.combiner |  70 ++++++
 .../main/resources/examples/README.constraints  |  54 +++++
 docs/src/main/resources/examples/README.dirlist | 114 ++++++++++
 docs/src/main/resources/examples/README.export  |  91 ++++++++
 .../src/main/resources/examples/README.filedata |  47 ++++
 docs/src/main/resources/examples/README.filter  | 110 ++++++++++
 .../main/resources/examples/README.helloworld   |  47 ++++
 .../main/resources/examples/README.isolation    |  50 +++++
 docs/src/main/resources/examples/README.mapred  | 154 +++++++++++++
 .../main/resources/examples/README.maxmutation  |  47 ++++
 docs/src/main/resources/examples/README.regex   |  58 +++++
 .../main/resources/examples/README.reservations |  66 ++++++
 docs/src/main/resources/examples/README.rowhash |  59 +++++
 docs/src/main/resources/examples/README.shard   |  67 ++++++
 .../main/resources/examples/README.tabletofile  |  59 +++++
 .../src/main/resources/examples/README.terasort |  50 +++++
 .../main/resources/examples/README.visibility   | 131 +++++++++++
 docs/src/main/resources/index.html              |  40 ++++
 docs/src/main/resources/isolation.html          |  51 +++++
 docs/src/main/resources/lgroups.html            |  45 ++++
 docs/src/main/resources/metrics.html            | 182 +++++++++++++++
 docs/src/main/resources/timestamps.html         | 160 ++++++++++++++
 pom.xml                                         |   4 +-
 .../accumulo/monitor/servlets/BasicServlet.java |   4 +-
 .../monitor/servlets/DefaultServlet.java        |  26 +--
 .../src/main/resources/docs/administration.html | 171 ---------------
 .../src/main/resources/docs/bulkIngest.html     | 114 ----------
 .../src/main/resources/docs/combiners.html      |  85 -------
 .../src/main/resources/docs/constraints.html    |  49 -----
 .../main/resources/docs/distributedTracing.html |  99 ---------
 .../src/main/resources/docs/documentation.css   | 112 ----------
 .../src/main/resources/docs/examples/README     |  95 --------
 .../main/resources/docs/examples/README.batch   |  55 -----
 .../main/resources/docs/examples/README.bloom   | 219 -------------------
 .../resources/docs/examples/README.bulkIngest   |  33 ---
 .../resources/docs/examples/README.classpath    |  68 ------
 .../main/resources/docs/examples/README.client  |  79 -------
 .../resources/docs/examples/README.combiner     |  70 ------
 .../resources/docs/examples/README.constraints  |  54 -----
 .../main/resources/docs/examples/README.dirlist | 114 ----------
 .../main/resources/docs/examples/README.export  |  91 --------
 .../resources/docs/examples/README.filedata     |  47 ----
 .../main/resources/docs/examples/README.filter  | 110 ----------
 .../resources/docs/examples/README.helloworld   |  47 ----
 .../resources/docs/examples/README.isolation    |  50 -----
 .../main/resources/docs/examples/README.mapred  | 154 -------------
 .../resources/docs/examples/README.maxmutation  |  47 ----
 .../main/resources/docs/examples/README.regex   |  58 -----
 .../resources/docs/examples/README.reservations |  66 ------
 .../main/resources/docs/examples/README.rowhash |  59 -----
 .../main/resources/docs/examples/README.shard   |  67 ------
 .../resources/docs/examples/README.tabletofile  |  59 -----
 .../resources/docs/examples/README.terasort     |  50 -----
 .../resources/docs/examples/README.visibility   | 131 -----------
 .../monitor/src/main/resources/docs/index.html  |  41 ----
 .../src/main/resources/docs/isolation.html      |  39 ----
 .../src/main/resources/docs/lgroups.html        |  42 ----
 .../src/main/resources/docs/metrics.html        | 182 ---------------
 .../src/main/resources/docs/timestamps.html     | 160 --------------
 76 files changed, 2982 insertions(+), 2960 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/5655a044/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
index 843d126,847fd02..607300c
--- a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
@@@ -70,24 -60,11 +70,15 @@@ public class DefaultConfiguration exten
          props.put(entry.getKey(), entry.getValue());
    }
  
-   /**
-    * Generates HTML documentation on the default configuration. Used by the monitor to show configuration properties.
-    *
-    * @param doc stream to write HTML to
-    */
-   protected static void generateDocumentation(PrintStream doc) {
-     new ConfigurationDocGen(doc).generateHtml();
-   }
- 
    /*
 -   * Generate documentation for conf/accumulo-site.xml file usage
 +   * Generates documentation for conf/accumulo-site.xml file usage. Arguments
 +   * are: "--generate-doc", file to write to.
 +   *
 +   * @param args command-line arguments
 +   * @throws IllegalArgumentException if args is invalid
     */
    public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {
-     if (args.length == 2 && args[0].equals("--generate-doc")) {
+     if (args.length == 2 && args[0].equals("--generate-html")) {
        new ConfigurationDocGen(new PrintStream(args[1], Constants.UTF8.name())).generateHtml();
      } else if (args.length == 2 && args[0].equals("--generate-latex")) {
        new ConfigurationDocGen(new PrintStream(args[1], Constants.UTF8.name())).generateLaTeX();

http://git-wip-us.apache.org/repos/asf/accumulo/blob/5655a044/docs/pom.xml
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/5655a044/pom.xml
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/5655a044/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
----------------------------------------------------------------------


[4/6] ACCUMULO-1487, ACCUMULO-1491 Stop packaging docs for monitor

Posted by ct...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.dirlist
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.dirlist b/docs/src/main/resources/examples/README.dirlist
new file mode 100644
index 0000000..eb129dd
--- /dev/null
+++ b/docs/src/main/resources/examples/README.dirlist
@@ -0,0 +1,114 @@
+Title: Apache Accumulo File System Archive
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example stores filesystem information in accumulo, using the following three tables. More information about the table structures can be found at the end of this document.
+
+ * directory table : This table stores information about the filesystem directory structure.
+ * index table     : This table stores a file name index. It can be used to quickly find files with a given name, suffix, or prefix.
+ * data table      : This table stores the file data. Files with duplicate data are only stored once.
+
+This example shows how to use Accumulo to store a file system history. It has the following classes:
+
+ * Ingest.java - Recursively lists the files and directories under a given path, ingests their names and file info into one Accumulo table, indexes the file names in a separate table, and ingests the file data into a third table.
+ * QueryUtil.java - Provides utility methods for getting the info for a file, listing the contents of a directory, and performing single wild card searches on file or directory names.
+ * Viewer.java - Provides a GUI for browsing the file system information stored in Accumulo.
+ * FileCount.java - Computes recursive counts over file system information and stores them back into the same Accumulo table.
+
+To begin, ingest some data with Ingest.java.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 100000 /local/username/workspace
+
+This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
+Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
+If you modify a file or add new files in the directory ingested (e.g. /local/username/workspace), you can run Ingest again to add new information into the Accumulo tables.
+
+To browse the data ingested, use Viewer.java. Be sure to give the "username" user the authorizations to see the data. In this case, run
+
+    $ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
+
+then run the Viewer:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i instance -z zookeepers -u username -p password -t dirTable --dataTable dataTable --auths exampleVis --path /local/username/workspace
+
+To list the contents of specific directories, use QueryUtil.java.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis --path /local/username/workspace
+
+To perform searches on file or directory names, also use QueryUtil.java. Search terms must contain no more than one wild card and cannot contain "/".
+*Note* these queries run on the _indexTable_ table instead of the dirTable table.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path filename --search
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*' --search
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path '*jar' --search
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i instance -z zookeepers -u username -p password -t indexTable --auths exampleVis --path 'filename*jar' --search
+
+To count the number of direct children (directories and files) and descendants (children and children's descendants, directories and files), run the FileCount over the dirTable table.
+The results are written back to the same table. FileCount reads from and writes to Accumulo. This requires scan authorizations for the read and a visibility for the data written.
+In this example, the authorizations and visibility are set to the same value, exampleVis. See README.visibility for more information on visibility and authorizations.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
+
+## Directory Table
+
+Here is an illustration of what data looks like in the directory table:
+
+    row colf:colq [vis]	value
+    000 dir:exec [exampleVis]    true
+    000 dir:hidden [exampleVis]    false
+    000 dir:lastmod [exampleVis]    1291996886000
+    000 dir:length [exampleVis]    1666
+    001/local dir:exec [exampleVis]    true
+    001/local dir:hidden [exampleVis]    false
+    001/local dir:lastmod [exampleVis]    1304945270000
+    001/local dir:length [exampleVis]    272
+    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:exec [exampleVis]    false
+    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:hidden [exampleVis]    false
+    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod [exampleVis]    1308746481000
+    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length [exampleVis]    9192
+    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]    274af6419a3c4c4a259260ac7017cbf1
+
+The rows are of the form depth + path, where depth is the number of slashes ("/") in the path padded to 3 digits. This is so that all the children of a directory appear as consecutive keys in Accumulo; without the depth, you would for example see all the subdirectories of /local before you saw /usr.
+For directories, the column family is "dir". For files, the column family is Long.MAX_VALUE - lastModified, stored as bytes rather than as a string so that newer versions sort earlier.
+
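+As a rough illustration, the row key could be assembled as in the hypothetical helper below (the actual Ingest code may differ in details):
+
+    // Hypothetical sketch: depth-prefixed row key as described above.
+    static String rowKey(String path) {
+      int depth = 0;
+      for (char c : path.toCharArray())
+        if (c == '/')
+          depth++;
+      return String.format("%03d%s", depth, path); // e.g. "002/local/Accumulo.README"
+    }
+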
+## Index Table
+
+Here is an illustration of what data looks like in the index table:
+
+    row colf:colq [vis]
+    fAccumulo.README i:002/local/Accumulo.README [exampleVis]
+    flocal i:001/local [exampleVis]
+    rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
+    rlacol i:001/local [exampleVis]
+
+The values of the index table are null. The rows are of the form "f" + filename or "r" + reverse file name. This is to enable searches with wildcards at the beginning, middle, or end.
+
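+As a rough illustration, the two index rows for a file name could be built as in this hypothetical helper:
+
+    // Hypothetical sketch: forward and reversed index rows as described above.
+    static String[] indexRows(String filename) {
+      String reversed = new StringBuilder(filename).reverse().toString();
+      return new String[] {"f" + filename, "r" + reversed}; // e.g. "fAccumulo.README", "rEMDAER.olumuccA"
+    }
+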
+## Data Table
+
+Here is an illustration of what data looks like in the data table:
+
+    row colf:colq [vis]	value
+    274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
+    274af6419a3c4c4a259260ac7017cbf1 refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    /local/Accumulo.README
+    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 [exampleVis]    *******************************************************************************\x0A1. Building\x0A\x0AIn the normal tarball or RPM release of accumulo, [truncated]
+    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 [exampleVis]
+
+The rows are the md5 hash of the file. Some column family : column qualifier pairs are "refs" : hash of file name + null byte + property name, in which case the value is property value. There can be multiple references to the same file which are distinguished by the hash of the file name.
+Other column family : column qualifier pairs are "~chunk" : chunk size in bytes + chunk number in bytes, in which case the value is the bytes for that chunk of the file. There is an end of file data marker whose chunk number is the number of chunks for the file and whose value is empty.
+
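+As a rough illustration, and assuming the 4-byte chunk size plus 4-byte chunk number layout shown in the rows above, such a qualifier could be assembled with this hypothetical helper:
+
+    // Hypothetical sketch: "~chunk" column qualifier as described above.
+    static byte[] chunkCq(int chunkSize, int chunkNumber) {
+      return java.nio.ByteBuffer.allocate(8)
+          .putInt(chunkSize)   // chunk size in bytes
+          .putInt(chunkNumber) // chunk index; the end-of-file marker uses the chunk count
+          .array();
+    }
+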
+There may exist multiple copies of the same file (with the same md5 hash) with different chunk sizes or different visibilities. There is an iterator that can be set on the data table that combines these copies into a single copy with a visibility taken from the visibilities of the file references, e.g. (vis from ref1)|(vis from ref2).

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.export
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.export b/docs/src/main/resources/examples/README.export
new file mode 100644
index 0000000..b6ea8f8
--- /dev/null
+++ b/docs/src/main/resources/examples/README.export
@@ -0,0 +1,91 @@
+Title: Apache Accumulo Export/Import Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+Accumulo provides a mechanism to export and import tables. This README shows
+how to use this feature.
+
+The shell session below shows creating a table, inserting data, and exporting
+the table. A table must be offline to export it, and it should remain offline
+for the duration of the distcp. An easy way to take a table offline without
+interrupting access to it is to clone it and take the clone offline.
+
+    root@test15> createtable table1
+    root@test15 table1> insert a cf1 cq1 v1
+    root@test15 table1> insert h cf1 cq1 v2
+    root@test15 table1> insert z cf1 cq1 v3
+    root@test15 table1> insert z cf1 cq2 v4
+    root@test15 table1> addsplits -t table1 b r
+    root@test15 table1> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15> config -t table1 -s table.split.threshold=100M
+    root@test15 table1> clonetable table1 table1_exp
+    root@test15 table1> offline table1_exp
+    root@test15 table1> exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1> quit
+
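+The same sequence can be performed through the Java API; a minimal sketch,
+assuming an already-constructed Connector named conn:
+
+    import java.util.Collections;
+
+    // Clone, take the clone offline, then export it (sketch only).
+    conn.tableOperations().clone("table1", "table1_exp", true,
+        Collections.<String,String> emptyMap(), Collections.<String> emptySet());
+    conn.tableOperations().offline("table1_exp");
+    conn.tableOperations().exportTable("table1_exp", "/tmp/table1_export");
+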
+After executing the export command, a few files are created in the HDFS
+directory. One of these files is the list of files to distcp, as shown below.
+
+    $ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+
+Before the table can be imported, it must be copied using distcp. After the
+distcp completes, the cloned table may be deleted.
+
+    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+
+The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.
+
+    root@test15> importtable table1_copy /tmp/table1_export_dest
+    root@test15> table table1_copy
+    root@test15 table1_copy> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy> getsplits -t table1_copy
+    b
+    r
+    root@test15> config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15> tables -l
+    accumulo.metadata    =>        !0
+    accumulo.root        =>        +r
+    table1_copy          =>         5
+    trace                =>         1
+    root@test15 table1_copy> scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5< srv:time []    M1343224500467
+
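+Programmatically, the import step is a single TableOperations call; a sketch,
+assuming an existing Connector named conn:
+
+    // Creates table1_copy from the exported, distcp'd files.
+    conn.tableOperations().importTable("table1_copy", "/tmp/table1_export_dest");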
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.filedata
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.filedata b/docs/src/main/resources/examples/README.filedata
new file mode 100644
index 0000000..26a6c1e
--- /dev/null
+++ b/docs/src/main/resources/examples/README.filedata
@@ -0,0 +1,47 @@
+Title: Apache Accumulo File System Archive Example (Data Only)
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example archives file data into an Accumulo table. Files with duplicate data are only stored once.
+The example has the following classes:
+
+ * CharacterHistogram - A MapReduce that computes a histogram of byte frequency for each file and stores the histogram alongside the file data. An example use of the ChunkInputFormat.
+ * ChunkCombiner - An Iterator that deduplicates file data and sets the visibility to a combined visibility based on the current references to the file data.
+ * ChunkInputFormat - An Accumulo InputFormat that provides keys containing file info (List<Entry<Key,Value>>) and values with an InputStream over the file (ChunkInputStream).
+ * ChunkInputStream - An input stream over file data stored in Accumulo.
+ * FileDataIngest - Takes a list of files and archives them into Accumulo keyed on hashes of the files.
+ * FileDataQuery - Retrieves file data based on the hash of the file. (Used by the dirlist.Viewer.)
+ * KeyUtil - A utility for creating and parsing null-byte separated strings into/from Text objects.
+ * VisibilityCombiner - A utility for merging visibilities into the form (VIS1)|(VIS2)|...
+
+This example is coupled with the dirlist example. See README.dirlist for instructions.
+
+If you haven't already run the README.dirlist example, ingest a file with FileDataIngest.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README
+
+Open the accumulo shell and look at the data. The row is the MD5 hash of the file, which you can verify by running a command such as 'md5sum' on the file.
+
+    > scan -t dataTable
+
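+For reference, the hex form of that MD5 row can be reproduced with plain Java;
+a sketch (the file path is a placeholder):
+
+    import java.io.FileInputStream;
+    import java.security.MessageDigest;
+
+    MessageDigest md5 = MessageDigest.getInstance("MD5");
+    FileInputStream in = new FileInputStream("/path/to/README");  // placeholder
+    byte[] buf = new byte[4096];
+    for (int n = in.read(buf); n >= 0; n = in.read(buf))
+      md5.update(buf, 0, n);
+    in.close();
+    StringBuilder hex = new StringBuilder();
+    for (byte b : md5.digest())
+      hex.append(String.format("%02x", b));
+    System.out.println(hex);  // should match the row in dataTable
+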
+Run the CharacterHistogram MapReduce to add some information about the file.
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis
+
+Scan again to see the histogram stored in the 'info' column family.
+
+    > scan -t dataTable

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.filter
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.filter b/docs/src/main/resources/examples/README.filter
new file mode 100644
index 0000000..e00ba4a
--- /dev/null
+++ b/docs/src/main/resources/examples/README.filter
@@ -0,0 +1,110 @@
+Title: Apache Accumulo Filter Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This is a simple filter example. It uses the AgeOffFilter that is provided as
+part of the core package org.apache.accumulo.core.iterators.user. Filters are
+iterators that select desired key/value pairs (or weed out undesired ones).
+Filters extend the org.apache.accumulo.core.iterators.Filter class
+and must implement a method accept(Key k, Value v). This method returns true
+if the key/value pair is to be delivered and false if it is to be ignored.
+Filter takes a "negate" parameter which defaults to false. If set to true, the
+return value of the accept method is negated, so that key/value pairs accepted
+by the method are omitted by the Filter.
+
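+For illustration, a minimal custom Filter might look like the following sketch
+(a hypothetical class, not part of the shipped examples):
+
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.iterators.Filter;
+
+    // Keeps only entries whose value is non-empty.
+    public class NonEmptyValueFilter extends Filter {
+      @Override
+      public boolean accept(Key k, Value v) {
+        return v.getSize() > 0;  // true delivers the pair, false drops it
+      }
+    }
+
+The shell session below configures the built-in AgeOffFilter on a new table:
+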
+    username@instance> createtable filtertest
+    username@instance filtertest> setiter -t filtertest -scan -p 10 -n myfilter -ageoff
+    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+    ----------> set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
+    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
+    ----------> set AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day:
+    username@instance filtertest> scan
+    username@instance filtertest> insert foo a b c
+    username@instance filtertest> scan
+    foo a:b []    c
+    username@instance filtertest>
+
+... wait 30 seconds ...
+
+    username@instance filtertest> scan
+    username@instance filtertest>
+
+Note the absence of the entry inserted more than 30 seconds ago. Since the
+scope was set to "scan", this means the entry is still in Accumulo, but is
+being filtered out at query time. To delete entries from Accumulo based on
+the ages of their timestamps, AgeOffFilters should be set up for the "minc"
+and "majc" scopes, as well.
+
+To force an ageoff of the persisted data, after setting up the ageoff iterator
+on the "minc" and "majc" scopes you can flush and compact your table. This will
+happen automatically as a background operation on any table that is being
+actively written to, but can also be requested in the shell.
+
+The first setiter command used the special -ageoff flag to specify the
+AgeOffFilter, but any Filter can be configured by using the -class flag. The
+following commands show how to enable the AgeOffFilter for the minc and majc
+scopes using the -class flag, then flush and compact the table.
+
+    username@instance filtertest> setiter -t filtertest -minc -majc -p 10 -n myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
+    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+    ----------> set AgeOffFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method:
+    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 30000
+    ----------> set AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day:
+    username@instance filtertest> flush
+    06 10:42:24,806 [shell.Shell] INFO : Flush of table filtertest initiated...
+    username@instance filtertest> compact
+    06 10:42:36,781 [shell.Shell] INFO : Compaction of table filtertest started for given range
+    username@instance filtertest> flush -t filtertest -w
+    06 10:42:52,881 [shell.Shell] INFO : Flush of table filtertest completed.
+    username@instance filtertest> compact -t filtertest -w
+    06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
+    06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest completed for given range
+    username@instance filtertest>
+
+By default, flush and compact execute in the background, but with the -w flag
+they will wait to return until the operation has completed. Both are
+demonstrated above, though only one call to each would be necessary. A
+specific table can be specified with -t.
+
+After the compaction runs, the newly created files will not contain any data
+that should have been aged off, and the Accumulo garbage collector will remove
+the old files.
+
+To see the iterator settings for a table, use config.
+
+    username@instance filtertest> config -t filtertest -f iterator
+    ---------+---------------------------------------------+---------------------------------------------------------------------------
+    SCOPE    | NAME                                        | VALUE
+    ---------+---------------------------------------------+---------------------------------------------------------------------------
+    table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.majc.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
+    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.minc.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
+    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.scan.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.user.VersioningIterator
+    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+    ---------+---------------------------------------------+---------------------------------------------------------------------------
+    username@instance filtertest>
+
+When setting new iterators, make sure to order their priority numbers
+(specified with -p) in the order you would like the iterators to be applied.
+Also, each iterator must have a unique name and priority within each scope.
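+
+Programmatically, the equivalent of the setiter commands above is an
+IteratorSetting; a sketch, assuming an existing Connector named conn:
+
+    import java.util.EnumSet;
+    import org.apache.accumulo.core.client.IteratorSetting;
+    import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+    import org.apache.accumulo.core.iterators.user.AgeOffFilter;
+
+    IteratorSetting setting = new IteratorSetting(10, "myfilter", AgeOffFilter.class);
+    setting.addOption("ttl", "30000");  // milliseconds
+    conn.tableOperations().attachIterator("filtertest", setting,
+        EnumSet.of(IteratorScope.scan, IteratorScope.minc, IteratorScope.majc));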

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.helloworld
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.helloworld b/docs/src/main/resources/examples/README.helloworld
new file mode 100644
index 0000000..7d41ba3
--- /dev/null
+++ b/docs/src/main/resources/examples/README.helloworld
@@ -0,0 +1,47 @@
+Title: Apache Accumulo Hello World Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.helloworld in the examples-simple module:
+
+ * InsertWithBatchWriter.java - Inserts 10K rows (50K entries) into accumulo with each row having 5 entries
+ * ReadData.java - Reads all data between two rows
+
+Log into the accumulo shell:
+
+    $ ./bin/accumulo shell -u username -p password
+
+Create a table called 'hellotable':
+
+    username@instance> createtable hellotable
+
+Launch a Java program that inserts data with a BatchWriter:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i instance -z zookeepers -u username -p password -t hellotable
+
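+At its core, InsertWithBatchWriter follows the standard BatchWriter pattern; a
+minimal sketch (connection parameters and the exact row/column layout are
+placeholders, not the example's exact code):
+
+    import org.apache.accumulo.core.client.BatchWriter;
+    import org.apache.accumulo.core.client.BatchWriterConfig;
+    import org.apache.accumulo.core.client.Connector;
+    import org.apache.accumulo.core.client.ZooKeeperInstance;
+    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+    import org.apache.accumulo.core.data.Mutation;
+    import org.apache.accumulo.core.data.Value;
+
+    Connector conn = new ZooKeeperInstance("instance", "zookeepers")
+        .getConnector("username", new PasswordToken("password"));
+    BatchWriter bw = conn.createBatchWriter("hellotable", new BatchWriterConfig());
+    for (int i = 0; i < 10000; i++) {
+      Mutation m = new Mutation(String.format("row_%04d", i));
+      m.put("colfam", "colqual", new Value(("value_" + i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();  // flushes any buffered mutations
+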
+On the accumulo status page at the URL below (where 'master' is replaced with the name or IP of your accumulo master), you should see 50K entries.
+
+    http://master:50095/
+
+To view the entries, use the shell to scan the table:
+
+    username@instance> table hellotable
+    username@instance hellotable> scan
+
+You can also use a Java class to scan the table:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData -i instance -z zookeepers -u username -p password -t hellotable --startKey row_0 --endKey row_1001
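+
+ReadData amounts to scanning a row range; a sketch of the same idea using the
+client API, assuming an existing Connector named conn:
+
+    import java.util.Map;
+    import org.apache.accumulo.core.client.Scanner;
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Range;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.security.Authorizations;
+
+    Scanner scanner = conn.createScanner("hellotable", Authorizations.EMPTY);
+    scanner.setRange(new Range("row_0", "row_1001"));  // start and end rows
+    for (Map.Entry<Key,Value> entry : scanner)
+      System.out.println(entry.getKey() + " -> " + entry.getValue());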

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.isolation
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.isolation b/docs/src/main/resources/examples/README.isolation
new file mode 100644
index 0000000..4739f59
--- /dev/null
+++ b/docs/src/main/resources/examples/README.isolation
@@ -0,0 +1,50 @@
+Title: Apache Accumulo Isolation Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+
+Accumulo has an isolated scanner that ensures partial changes to rows are not
+seen. Isolation is documented in ../docs/isolation.html and the user manual.
+
+InterferenceTest is a simple example that shows the effects of scanning with
+and without isolation. This program starts two threads. One thread
+continually updates all of the values in a row to the same value, which
+differs from the previous value. The other thread continually scans the
+table and checks that all values in a row are the same. Without isolation, the
+scanning thread will sometimes see different values within a row, which is the
+result of reading the row at the same time a mutation is changing it.
+
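+Client code opts in by wrapping a regular Scanner in an IsolatedScanner; a
+minimal sketch, assuming an existing Connector named conn:
+
+    import org.apache.accumulo.core.client.IsolatedScanner;
+    import org.apache.accumulo.core.client.Scanner;
+    import org.apache.accumulo.core.security.Authorizations;
+
+    // With isolation, either all or none of a row's changes are seen.
+    Scanner scanner = new IsolatedScanner(conn.createScanner("isotest", Authorizations.EMPTY));
+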
+Below, InterferenceTest is run without isolation enabled for 5000 iterations
+and it reports problems.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000
+    ERROR Columns in row 053 had multiple values [53, 4553]
+    ERROR Columns in row 061 had multiple values [561, 61]
+    ERROR Columns in row 070 had multiple values [570, 1070]
+    ERROR Columns in row 079 had multiple values [1079, 1579]
+    ERROR Columns in row 088 had multiple values [2588, 1588]
+    ERROR Columns in row 106 had multiple values [2606, 3106]
+    ERROR Columns in row 115 had multiple values [4615, 3115]
+    finished
+
+Below, InterferenceTest is run with isolation enabled for 5000 iterations and
+it reports no problems.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.isolation.InterferenceTest -i instance -z zookeepers -u username -p password -t isotest --iterations 5000 --isolated
+    finished
+
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.mapred
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.mapred b/docs/src/main/resources/examples/README.mapred
new file mode 100644
index 0000000..9e9b17f
--- /dev/null
+++ b/docs/src/main/resources/examples/README.mapred
@@ -0,0 +1,154 @@
+Title: Apache Accumulo MapReduce Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example uses mapreduce and accumulo to compute word counts for a set of
+documents. This is accomplished using a map-only mapreduce job and an
+accumulo table with combiners.
+
+To run this example you will need a directory in HDFS containing text files.
+The accumulo readme will be used to show how to run this example.
+
+    $ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+    $ hadoop fs -ls /user/username/wc
+    Found 1 items
+    -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
+
+The first part of running this example is to create a table with a combiner
+for the column family count.
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable wordCount
+    username@instance wordCount> setiter -class org.apache.accumulo.core.iterators.user.SummingCombiner -p 10 -t wordCount -majc -minc -scan
+    SummingCombiner interprets Values as Longs and adds them together. A variety of encodings (variable length, fixed length, or string) are available
+    ----------> set SummingCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.: false
+    ----------> set SummingCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non-alphanum chars using %<hex>.: count
+    ----------> set SummingCombiner parameter lossy, if true, failed decodes are ignored. Otherwise combiner will error on failed decodes (default false): <TRUE|FALSE>: false
+    ----------> set SummingCombiner parameter type, <VARLEN|FIXEDLEN|STRING|fullClassName>: STRING
+    username@instance wordCount> quit
+
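+The same combiner can be attached through the Java API; a sketch, assuming an
+existing Connector named conn:
+
+    import java.util.Collections;
+    import org.apache.accumulo.core.client.IteratorSetting;
+    import org.apache.accumulo.core.iterators.LongCombiner;
+    import org.apache.accumulo.core.iterators.user.SummingCombiner;
+
+    IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
+    SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+    SummingCombiner.setColumns(setting,
+        Collections.singletonList(new IteratorSetting.Column("count")));
+    conn.tableOperations().attachIterator("wordCount", setting);  // all scopes
+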
+After creating the table, run the word count map reduce job.
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+
+    11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
+    11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
+    11/02/07 18:20:13 INFO mapred.JobClient:  map 0% reduce 0%
+    11/02/07 18:20:20 INFO mapred.JobClient:  map 100% reduce 0%
+    11/02/07 18:20:22 INFO mapred.JobClient: Job complete: job_201102071740_0003
+    11/02/07 18:20:22 INFO mapred.JobClient: Counters: 6
+    11/02/07 18:20:22 INFO mapred.JobClient:   Job Counters
+    11/02/07 18:20:22 INFO mapred.JobClient:     Launched map tasks=1
+    11/02/07 18:20:22 INFO mapred.JobClient:     Data-local map tasks=1
+    11/02/07 18:20:22 INFO mapred.JobClient:   FileSystemCounters
+    11/02/07 18:20:22 INFO mapred.JobClient:     HDFS_BYTES_READ=10487
+    11/02/07 18:20:22 INFO mapred.JobClient:   Map-Reduce Framework
+    11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
+    11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
+    11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
+
+After the map reduce job completes, query the accumulo table to see word
+counts.
+
+    $ ./bin/accumulo shell -u username -p password
+    username@instance> table wordCount
+    username@instance wordCount> scan -b the
+    the count:20080906 []    75
+    their count:20080906 []    2
+    them count:20080906 []    1
+    then count:20080906 []    1
+    there count:20080906 []    1
+    these count:20080906 []    3
+    this count:20080906 []    6
+    through count:20080906 []    1
+    time count:20080906 []    3
+    time. count:20080906 []    1
+    to count:20080906 []    27
+    total count:20080906 []    1
+    tserver, count:20080906 []    1
+    tserver.compaction.major.concurrent.max count:20080906 []    1
+    ...
+
+Another example to look at is
+org.apache.accumulo.examples.simple.mapreduce.UniqueColumns. This example
+computes the unique set of columns in a table and shows how a map reduce job
+can directly read a table's files from HDFS.
+
+One more example available is
+org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount.
+The TokenFileWordCount example works exactly the same as the WordCount example
+explained above except that it uses a token file rather than giving the
+password directly to the map-reduce job (this avoids having the password
+displayed in the job's configuration which is world-readable).
+
+To create a token file, use the create-token utility:
+
+    $ ./bin/accumulo create-token
+
+It defaults to creating a PasswordToken, but you can specify the token class
+with -tc (requires the fully qualified class name). Based on the token class,
+it will prompt you for each property required to create the token.
+
+The last value it prompts for is a local filename to save to. If this file
+exists, it will append the new token to the end. Multiple tokens can exist in
+a file, but only the first one for each user will be recognized.
+
+Rather than waiting for the prompts, you can specify some options when calling
+create-token, for example:
+
+    $ ./bin/accumulo create-token -u root -p secret -f root.pw
+
+would create a token file containing a PasswordToken for user 'root' with
+password 'secret', saved to the local file 'root.pw'.
+
+This local file needs to be uploaded to hdfs to be used with the
+map-reduce job. For example, if the file were 'root.pw' in the local directory:
+
+    $ hadoop fs -put root.pw root.pw
+
+This would put 'root.pw' in the user's home directory in hdfs.
+
+Because the basic WordCount example uses Opts (which extends
+ClientOnRequiredTable) to parse its arguments, you can use a token file with
+the basic WordCount example by running the same command as explained above,
+replacing the password with the token file (use -tf rather than -p).
+
+    $ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
+
+In the above examples, username was 'root' and tokenfile was 'root.pw'.
+
+However, if you don't want to use the Opts class to parse arguments,
+TokenFileWordCount is an example of using the token file manually.
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
+
+The results should be the same as the WordCount example except that the
+authentication token was not stored in the configuration. It was instead
+stored in a file that the map-reduce job pulled into the distributed cache.
+(If you ran either of these on the same table right after the
+WordCount example, then the resulting counts should just double.)
+
+
+
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.maxmutation
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.maxmutation b/docs/src/main/resources/examples/README.maxmutation
new file mode 100644
index 0000000..7fb3e08
--- /dev/null
+++ b/docs/src/main/resources/examples/README.maxmutation
@@ -0,0 +1,47 @@
+Title: Apache Accumulo MaxMutation Constraints Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This is an example of how to limit the size of mutations that will be accepted
+into a table. Under the default configuration, accumulo does not limit the size
+of mutations that can be ingested. Poorly behaved writers might inadvertently
+create mutations so large that they cause the tablet servers to run out of
+memory. A simple constraint can be added to a table to reject very large
+mutations.
+
+    $ ./bin/accumulo shell -u username -p password
+
+    Shell - Apache Accumulo Interactive Shell
+    -
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable test_ingest
+    username@instance test_ingest> config -t test_ingest -s table.constraint.1=org.apache.accumulo.examples.simple.constraints.MaxMutationSize
+    username@instance test_ingest>
+
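+The constraint can also be added through the Java API; a sketch, assuming an
+existing Connector named conn:
+
+    // Returns the number assigned to the constraint (1 in the session above).
+    int constraintNum = conn.tableOperations().addConstraint("test_ingest",
+        "org.apache.accumulo.examples.simple.constraints.MaxMutationSize");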
+
+Now the table will reject any mutation that is larger than 1/256th of the
+working memory of the tablet server. The following command attempts to ingest
+a single row with 10000 columns, which exceeds the memory limit:
+
+    $ ./bin/accumulo org.apache.accumulo.test.TestIngest -i instance -z zookeepers -u username -p password --rows 1 --cols 10000
+    ERROR : Constraint violates : ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.MaxMutationSize, violationCode:0, violationDescription:mutation exceeded maximum size of 188160, numberOfViolatingMutations:1)
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.regex
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.regex b/docs/src/main/resources/examples/README.regex
new file mode 100644
index 0000000..a5cc854
--- /dev/null
+++ b/docs/src/main/resources/examples/README.regex
@@ -0,0 +1,58 @@
+Title: Apache Accumulo Regex Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example uses mapreduce and accumulo to find items using regular expressions.
+This is accomplished using a map-only mapreduce job and a scan-time iterator.
+
+To run this example you will need some data in a table. The following will
+put a trivial amount of data into accumulo using the accumulo shell:
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable input
+    username@instance> insert dogrow dogcf dogcq dogvalue
+    username@instance> insert catrow catcf catcq catvalue
+    username@instance> quit
+
+The RegexExample class sets an iterator on the scanner. This does pattern matching
+against each key/value in accumulo, and only returns matching items. It will do this
+in parallel and will store the results in files in hdfs.
+
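+The same row regex can be applied to an interactive Scanner using the
+RegExFilter iterator; a sketch, assuming an existing Scanner named scanner:
+
+    import org.apache.accumulo.core.client.IteratorSetting;
+    import org.apache.accumulo.core.iterators.user.RegExFilter;
+
+    IteratorSetting setting = new IteratorSetting(10, "rowRegex", RegExFilter.class);
+    // Row regex only; nulls disable matching on family, qualifier, and value.
+    RegExFilter.setRegexs(setting, "dog.*", null, null, null, false);
+    scanner.addScanIterator(setting);
+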
+The following will search for any rows in the input table that start with "dog":
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
+
+    $ hadoop fs -ls /tmp/output
+    Found 3 items
+    -rw-r--r--   1 username supergroup          0 2013-01-10 14:11 /tmp/output/_SUCCESS
+    drwxr-xr-x   - username supergroup          0 2013-01-10 14:10 /tmp/output/_logs
+    -rw-r--r--   1 username supergroup         51 2013-01-10 14:10 /tmp/output/part-m-00000
+
+We can see the output of our little map-reduce job:
+
+    $ hadoop fs -text /tmp/output/part-m-00000
+    dogrow dogcf:dogcq [] 1357844987994 false	dogvalue
+    $
+
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.reservations
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.reservations b/docs/src/main/resources/examples/README.reservations
new file mode 100644
index 0000000..ff111b4
--- /dev/null
+++ b/docs/src/main/resources/examples/README.reservations
@@ -0,0 +1,66 @@
+Title: Apache Accumulo Reservations Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example shows running a simple reservation system implemented using
+conditional mutations. The system guarantees that only one user at a time can
+hold the reservation for a resource. The example's reserve command allows multiple users to be
+specified. When this is done, it creates a separate reservation thread for each
+user. In the example below threads are spun up for alice, bob, eve, mallory,
+and trent to reserve room06 on 20140101. Bob ends up getting the reservation
+and everyone else is put on a wait list. The example code will take any string
+for what, when and who.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.reservations.ARS
+    >connect test16 localhost root secret ars
+      connected
+    >
+      Commands :
+        reserve <what> <when> <who> {who}
+        cancel <what> <when> <who>
+        list <what> <when>
+    >reserve room06 20140101 alice bob eve mallory trent
+                       bob : RESERVED
+                   mallory : WAIT_LISTED
+                     alice : WAIT_LISTED
+                     trent : WAIT_LISTED
+                       eve : WAIT_LISTED
+    >list room06 20140101
+      Reservation holder : bob
+      Wait list : [mallory, alice, trent, eve]
+    >cancel room06 20140101 alice
+    >cancel room06 20140101 bob
+    >list room06 20140101
+      Reservation holder : mallory
+      Wait list : [trent, eve]
+    >quit
+
+Scanning the table in the Accumulo shell after running the example shows the
+following:
+
+    root@test16> table ars
+    root@test16 ars> scan
+    room06:20140101 res:0001 []    mallory
+    room06:20140101 res:0003 []    trent
+    room06:20140101 res:0004 []    eve
+    room06:20140101 tx:seq []    6
+
+The tx:seq column is incremented for each update to the row allowing for
+detection of concurrent changes. For an update to go through, the sequence
+number must not have changed since the data was read. If it does change,
+the conditional mutation will fail and the example code will retry.
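+
+Conceptually, each update is a conditional mutation whose condition checks the
+tx:seq value; a rough sketch, assuming an existing Connector named conn (the
+new wait-list entry is hypothetical):
+
+    import org.apache.accumulo.core.client.ConditionalWriter;
+    import org.apache.accumulo.core.client.ConditionalWriterConfig;
+    import org.apache.accumulo.core.data.Condition;
+    import org.apache.accumulo.core.data.ConditionalMutation;
+    import org.apache.accumulo.core.data.Value;
+
+    ConditionalWriter cw = conn.createConditionalWriter("ars", new ConditionalWriterConfig());
+    ConditionalMutation cm = new ConditionalMutation("room06:20140101");
+    cm.addCondition(new Condition("tx", "seq").setValue("6"));  // only if seq is still 6
+    cm.put("tx", "seq", new Value("7".getBytes()));
+    cm.put("res", "0005", new Value("carol".getBytes()));  // hypothetical entry
+    ConditionalWriter.Status status = cw.write(cm).getStatus();  // REJECTED means retry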
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.rowhash
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.rowhash b/docs/src/main/resources/examples/README.rowhash
new file mode 100644
index 0000000..43782c9
--- /dev/null
+++ b/docs/src/main/resources/examples/README.rowhash
@@ -0,0 +1,59 @@
+Title: Apache Accumulo RowHash Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example shows a simple map/reduce job that reads from an accumulo table and
+writes back into that table.
+
+To run this example you will need some data in a table. The following will
+put a trivial amount of data into accumulo using the accumulo shell:
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable input
+    username@instance> insert a-row cf cq value
+    username@instance> insert b-row cf cq value
+    username@instance> quit
+
+The RowHash class will insert a hash for each row in the database if it contains a
+specified column. Here's how to run the map/reduce job:
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
+
+Now we can scan the table and see the hashes:
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> scan -t input
+    a-row cf:cq []    value
+    a-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
+    b-row cf:cq []    value
+    b-row cf-HASHTYPE:cq-MD5BASE64 []    IGPBYI1uC6+AJJxC4r5YBA==
+    username@instance>
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.shard
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.shard b/docs/src/main/resources/examples/README.shard
new file mode 100644
index 0000000..d08658a
--- /dev/null
+++ b/docs/src/main/resources/examples/README.shard
@@ -0,0 +1,67 @@
+Title: Apache Accumulo Shard Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+Accumulo has an iterator called the intersecting iterator which supports querying a term index that is partitioned by
+document, or "sharded". This example shows how to use the intersecting iterator through these four programs:
+
+ * Index.java - Indexes a set of text files into an Accumulo table
+ * Query.java - Finds documents containing a given set of terms.
+ * Reverse.java - Reads the index table and writes a map of documents to terms into another table.
+ * ContinuousQuery.java - Uses the table populated by Reverse.java to select N random terms per document. Then it continuously and randomly queries those terms.
+
+To run these example programs, create the two tables shown below.
+
+    username@instance> createtable shard
+    username@instance shard> createtable doc2term
+
+After creating the tables, index some files. The following command indexes all of the java files in the Accumulo source code.
+
+    $ cd /local/username/workspace/accumulo/
+    $ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
+
+The following command queries the index to find all files containing 'foo' and 'bar'.
+
+    $ cd $ACCUMULO_HOME
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
+    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
+    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
+    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/VisibilityEvaluatorTest.java
+    /local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/RowDeleteTest.java
+    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/logger/TestLogWriter.java
+    /local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/DeleteEverythingTest.java
+    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/data/KeyExtentTest.java
+    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/constraints/MetadataConstraintsTest.java
+    /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
+    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
+    /local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
+
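+Under the hood, the query side boils down to an IntersectingIterator on a
+BatchScanner; a rough sketch, assuming an existing Connector named conn:
+
+    import java.util.Collections;
+    import java.util.Map;
+    import org.apache.accumulo.core.client.BatchScanner;
+    import org.apache.accumulo.core.client.IteratorSetting;
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Range;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.iterators.user.IntersectingIterator;
+    import org.apache.accumulo.core.security.Authorizations;
+    import org.apache.hadoop.io.Text;
+
+    BatchScanner bs = conn.createBatchScanner("shard", Authorizations.EMPTY, 10);
+    IteratorSetting setting = new IteratorSetting(20, "ii", IntersectingIterator.class);
+    IntersectingIterator.setColumnFamilies(setting, new Text[] {new Text("foo"), new Text("bar")});
+    bs.addScanIterator(setting);
+    bs.setRanges(Collections.singleton(new Range()));  // search all partitions
+    for (Map.Entry<Key,Value> entry : bs)
+      System.out.println(entry.getKey().getColumnQualifier());  // matching document
+    bs.close();
+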
+In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
+
+Below, ContinuousQuery is run using 5 terms, so it selects 5 random terms from each
+document, then continually picks one set of 5 terms at random and queries with it.
+It prints the number of matching documents and the time in seconds.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
+    [public, core, class, binarycomparable, b] 2  0.081
+    [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
+    [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
+    [getpackage, testversion, util, version, 55] 1  0.048
+    [for, static, println, public, the] 55  0.211
+    [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
+    [string, public, long, 0, wait] 12  0.132

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.tabletofile
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.tabletofile b/docs/src/main/resources/examples/README.tabletofile
new file mode 100644
index 0000000..08b7cc9
--- /dev/null
+++ b/docs/src/main/resources/examples/README.tabletofile
@@ -0,0 +1,59 @@
+Title: Apache Accumulo Table-to-File Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example uses mapreduce to extract specified columns from an existing table.
+
+To run this example you will need some data in a table. The following will
+put a trivial amount of data into accumulo using the accumulo shell:
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable input
+    username@instance> insert dog cf cq dogvalue
+    username@instance> insert cat cf cq catvalue
+    username@instance> insert junk family qualifier junkvalue
+    username@instance> quit
+
+The TableToFile class configures a map-only job to read the specified columns and
+write the key/value pairs to a file in HDFS.
+
+The following will extract the rows containing the column "cf:cq":
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
+
+    $ hadoop fs -ls /tmp/output
+    -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
+    drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs
+    drwxr-xr-x   - username supergroup          0 2013-01-10 14:44 /tmp/output/_logs/history
+    -rw-r--r--   1 username supergroup       9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
+    -rw-r--r--   1 username supergroup      26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
+    -rw-r--r--   1 username supergroup         50 2013-01-10 14:44 /tmp/output/part-m-00000
+
+We can see the output of our little map-reduce job:
+
+    $ hadoop fs -text /tmp/output/part-m-00000
+    cat cf:cq []	catvalue
+    dog cf:cq []	dogvalue
+    $
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.terasort
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.terasort b/docs/src/main/resources/examples/README.terasort
new file mode 100644
index 0000000..409c1d1
--- /dev/null
+++ b/docs/src/main/resources/examples/README.terasort
@@ -0,0 +1,50 @@
+Title: Apache Accumulo Terasort Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example uses map/reduce to generate random input data that will
+be sorted by storing it into accumulo. It uses data very similar to the
+hadoop terasort benchmark.
+
+To run this example, supply arguments describing the amount of data:
+
+    $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
+    -i instance -z zookeepers -u user -p password \
+    --count 10 \
+    --minKeySize 10 \
+    --maxKeySize 10 \
+    --minValueSize 78 \
+    --maxValueSize 78 \
+    --table sort \
+    --splits 10
+
+After the map reduce job completes, scan the data:
+
+    $ ./bin/accumulo shell -u username -p password
+    username@instance> scan -t sort
+    +l-$$OE/ZH c:         4 []    GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
+    ,C)wDw//u= c:        10 []    CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
+    75@~?'WdUF c:         1 []    IIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQ
+    ;L+!2rT~hd c:         8 []    MMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUU
+    LsS8)|.ZLD c:         5 []    OOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWW
+    M^*dDE;6^< c:         9 []    UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCC
+    ^Eu)<n#kdP c:         3 []    YYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGG
+    le5awB.$sm c:         6 []    WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
+    q__[fwhKFg c:         7 []    EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
+    w[o||:N&H, c:         2 []    QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
+
+Of course, a real benchmark would ingest millions of entries.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.visibility
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.visibility b/docs/src/main/resources/examples/README.visibility
new file mode 100644
index 0000000..b766dba
--- /dev/null
+++ b/docs/src/main/resources/examples/README.visibility
@@ -0,0 +1,131 @@
+Title: Apache Accumulo Visibility, Authorizations, and Permissions Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+## Creating a new user
+
+    root@instance> createuser username
+    Enter new password for 'username': ********
+    Please confirm new password for 'username': ********
+    root@instance> user username
+    Enter password for user username: ********
+    username@instance> createtable vistest
+    06 10:48:47,931 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
+    username@instance> userpermissions
+    System permissions:
+
+    Table permissions (accumulo.metadata): Table.READ
+    username@instance>
+
+A user does not by default have permission to create a table.
+
+## Granting permissions to a user
+
+    username@instance> user root
+    Enter password for user root: ********
+    root@instance> grant -s System.CREATE_TABLE -u username
+    root@instance> user username
+    Enter password for user username: ********
+    username@instance> createtable vistest
+    username@instance> userpermissions
+    System permissions: System.CREATE_TABLE
+
+    Table permissions (accumulo.metadata): Table.READ
+    Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
+    username@instance vistest>
+
+## Inserting data with visibilities
+
+Visibilities are boolean AND (&) and OR (|) combinations of authorization
+tokens. Authorization tokens are arbitrary strings taken from a restricted
+ASCII character set. Parentheses are required to specify order of operations
+in visibilities.
+
+    username@instance vistest> insert row f1 q1 v1 -l A
+    username@instance vistest> insert row f2 q2 v2 -l A&B
+    username@instance vistest> insert row f3 q3 v3 -l apple&carrot|broccoli|spinach
+    06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and & near index 12
+    apple&carrot|broccoli|spinach
+                ^
+    username@instance vistest> insert row f3 q3 v3 -l (apple&carrot)|broccoli|spinach
+    username@instance vistest>
+
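+From Java, the same labels are attached to each entry with a ColumnVisibility;
+a minimal sketch, assuming an existing BatchWriter named bw:
+
+    import org.apache.accumulo.core.data.Mutation;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.security.ColumnVisibility;
+
+    Mutation m = new Mutation("row");
+    m.put("f1", "q1", new ColumnVisibility("A"), new Value("v1".getBytes()));
+    // A malformed expression throws when the ColumnVisibility is constructed.
+    m.put("f3", "q3", new ColumnVisibility("(apple&carrot)|broccoli|spinach"),
+        new Value("v3".getBytes()));
+    bw.addMutation(m);
+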
+## Scanning with authorizations
+
+Authorizations are sets of authorization tokens. Each Accumulo user has
+authorizations and each Accumulo scan has authorizations. Scan authorizations
+are only allowed to be a subset of the user's authorizations. By default, a
+user's authorizations set is empty.
+
+    username@instance vistest> scan
+    username@instance vistest> scan -s A
+    06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
+    username@instance vistest>
+
+## Setting authorizations for a user
+
+    username@instance vistest> setauths -s A
+    06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
+    username@instance vistest>
+
+A user cannot set authorizations unless the user has the System.ALTER_USER permission.
+The root user has this permission.
+
+    username@instance vistest> user root
+    Enter password for user root: ********
+    root@instance vistest> setauths -s A -u username
+    root@instance vistest> user username
+    Enter password for user username: ********
+    username@instance vistest> scan -s A
+    row f1:q1 [A]    v1
+    username@instance vistest> scan
+    row f1:q1 [A]    v1
+    username@instance vistest>
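+
+Programmatically, scan authorizations are passed when the scanner is created.
+A minimal sketch, assuming an existing Connector named conn:
+
+    import java.util.Map.Entry;
+    import org.apache.accumulo.core.client.Scanner;
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.security.Authorizations;
+
+    // Request only the "A" authorization; this must be a subset of the
+    // authorizations that have been granted to the user.
+    Scanner scanner = conn.createScanner("vistest", new Authorizations("A"));
+    for (Entry<Key,Value> entry : scanner)
+      System.out.println(entry.getKey() + " -> " + entry.getValue());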
+
+The default authorizations for a scan are the user's entire set of authorizations.
+
+    username@instance vistest> user root
+    Enter password for user root: ********
+    root@instance vistest> setauths -s A,B,broccoli -u username
+    root@instance vistest> user username
+    Enter password for user username: ********
+    username@instance vistest> scan
+    row f1:q1 [A]    v1
+    row f2:q2 [A&B]    v2
+    row f3:q3 [(apple&carrot)|broccoli|spinach]    v3
+    username@instance vistest> scan -s B
+    username@instance vistest>
+
+If you want, you can limit users to inserting only data they themselves are
+authorized to read. This is enforced by setting the following constraint.
+
+    username@instance vistest> user root
+    Enter password for user root: ******
+    root@instance vistest> config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
+    root@instance vistest> user username
+    Enter password for user username: ********
+    username@instance vistest> insert row f4 q4 v4 -l spinach
+        Constraint Failures:
+            ConstraintViolationSummary(constrainClass:org.apache.accumulo.core.security.VisibilityConstraint, violationCode:2, violationDescription:User does not have authorization on column visibility, numberOfViolatingMutations:1)
+    username@instance vistest> insert row f4 q4 v4 -l spinach|broccoli
+    username@instance vistest> scan
+    row f1:q1 [A]    v1
+    row f2:q2 [A&B]    v2
+    row f3:q3 [(apple&carrot)|broccoli|spinach]    v3
+    row f4:q4 [spinach|broccoli]    v4
+    username@instance vistest>
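+
+The constraint can also be set through the Java API. A minimal sketch,
+assuming a Connector named conn with permission to alter the table:
+
+    // Equivalent to the config -s command above.
+    conn.tableOperations().setProperty("vistest", "table.constraint.1",
+        "org.apache.accumulo.core.security.VisibilityConstraint");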
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/index.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/index.html b/docs/src/main/resources/index.html
new file mode 100644
index 0000000..cc4ecb7
--- /dev/null
+++ b/docs/src/main/resources/index.html
@@ -0,0 +1,40 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Documentation</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation</h1>
+<ul>
+<li><a href=accumulo_user_manual.pdf>User Manual</a></li>
+<li><a href=administration.html>Administration</a></li>
+<li><a href=combiners.html>Combiners</a></li>
+<li><a href=constraints.html>Constraints</a></li>
+<li><a href=bulkIngest.html>Bulk Ingest</a></li>
+<li><a href=config.html>Configuration</a></li>
+<li><a href=isolation.html>Isolation</a></li>
+<li><a href=lgroups.html>Locality Groups</a></li>
+<li><a href=timestamps.html>Timestamps</a></li>
+<li><a href=metrics.html>Metrics</a></li>
+<li><a href=distributedTracing.html>Distributed Tracing</a></li>
+</ul>
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/isolation.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/isolation.html b/docs/src/main/resources/isolation.html
new file mode 100644
index 0000000..00f47a5
--- /dev/null
+++ b/docs/src/main/resources/isolation.html
@@ -0,0 +1,51 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Isolation</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Isolation</h1>
+
+<h3>Scanning</h3>
+
+<p>Accumulo supports the ability to present an isolated view of rows when scanning. There are three possible ways that a row could change in accumulo:
+<ul>
+ <li>a mutation applied to a table
+ <li>iterators executed as part of a minor or major compaction
+ <li>bulk import of new files
+</ul>
+Isolation guarantees that either all or none of the changes made by these
+operations on a row are seen. Use the <code>IsolatedScanner</code> to obtain an
+isolated view of an accumulo table. When using the regular scanner it is
+possible to see a non isolated view of a row. For example if a mutation
+modifies three columns, it is possible that you will only see two of those
+modifications. With the isolated scanner either all three of the changes are
+seen or none. For an example of this try running the
+<code>InterferenceTest</code> example.
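+
+<p>A rough sketch of setting up an isolated scan (assuming an existing
+<code>Connector</code> named <code>conn</code> and authorizations named
+<code>auths</code>):
+
+<pre>
+// Wrap a regular scanner; rows are buffered on the client until complete
+Scanner scanner = new IsolatedScanner(conn.createScanner("table", auths));
+scanner.setRange(new Range());
+for (Entry<Key,Value> entry : scanner) {
+  // all entries within a row reflect a single consistent view
+}
+</pre>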
+
+<p>At this time there is no client side isolation support for the
+<code>BatchScanner</code>. You may consider using the
+<code>WholeRowIterator</code> with the  <code>BatchScanner</code> to achieve
+isolation though. The drawback of doing this is that entire rows are read into
+memory on the server side. If a row is too big, it may crash a tablet server.
+The <code>IsolatedScanner</code> buffers rows on the client side so a large row will not crash a tablet server.
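+
+<p>A sketch of the <code>WholeRowIterator</code> approach (again assuming
+<code>conn</code> and <code>auths</code>); each returned entry encodes an
+entire row, which can be decoded back into its columns:
+
+<pre>
+BatchScanner bs = conn.createBatchScanner("table", auths, 4);
+bs.setRanges(Collections.singletonList(new Range()));
+// Combine each row into a single key/value on the server side
+bs.addScanIterator(new IteratorSetting(50, "wri", WholeRowIterator.class));
+for (Entry<Key,Value> entry : bs) {
+  // decodeRow throws IOException if the encoded row is corrupt
+  SortedMap<Key,Value> row = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());
+}
+bs.close();
+</pre>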
+
+<h3>Iterators</h3>
+<p>Isolation is something to be aware of when writing server side iterators for accumulo. A scan time iterator in accumulo reads from a set of data sources. While an iterator is reading data it has an isolated view. However, after it returns a key/value it is possible that accumulo may switch data sources and re-seek the iterator. This is done so that resources may be reclaimed. When the user does not request isolation this can occur after any key is returned. When a user requests isolation this will only occur after a new row is returned, in which case it will re-seek to the very beginning of the next possible row.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/lgroups.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/lgroups.html b/docs/src/main/resources/lgroups.html
new file mode 100644
index 0000000..3d2bc0e
--- /dev/null
+++ b/docs/src/main/resources/lgroups.html
@@ -0,0 +1,45 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Locality Groups</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Locality Groups</h1>
+
+<p>Accumulo supports locality groups similar to those described in the Bigtable paper. Locality groups allow vertical partitioning of data by column family. This allows users to configure their tables such that scans over a subset of column families are much faster. The Accumulo locality group model has the following features.
+
+<ul>
+ <li>There is a default locality group that holds all column families not in a declared locality group.
+ <li>No requirement to declare locality groups or column families at table creation.
+ <li>Can change locality group configuration on the fly.
+</ul>
+
+
+<p>When the locality group configuration for a table is changed, it has no effect on existing data. All minor and major compactions that occur after the change will organize data into the new locality group structure. As data is written into a table, it will cause minor and major compactions to occur. Over time this will result in all data being organized according to the new locality groups. If all data must be reorganized into the new locality groups immediately, this can be accomplished by forcing a full major compaction of the table. Use the compact command in the shell to accomplish this.
+
+<p>There are two ways to manipulate locality groups: via the shell or through
+the Java API. From the shell use the getgroups and setgroups commands. Through
+the API, <code>TableOperations</code> has the methods setLocalityGroups() and getLocalityGroups().
+
+<p>To limit scans to a set of locality groups, use the fetchColumnFamily()
+function on <code>Scanner</code> or <code>BatchScanner</code>. From the shell use scan with the -c option.
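+
+<p>For example, a rough sketch using the Java API (assuming a
+<code>Connector</code> named <code>conn</code>; table, group, and family
+names are placeholders):
+
+<pre>
+// Place two column families into a locality group named "lg1"
+Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+groups.put("lg1", new HashSet<Text>(
+    Arrays.asList(new Text("colfamA"), new Text("colfamB"))));
+conn.tableOperations().setLocalityGroups("mytable", groups);
+
+// Scans fetching only these families can skip the other groups' data
+Scanner scanner = conn.createScanner("mytable", auths);
+scanner.fetchColumnFamily(new Text("colfamA"));
+</pre>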
+
+</body>
+</html>


[5/6] git commit: ACCUMULO-1487, ACCUMULO-1491 Stop packaging docs for monitor

Posted by ct...@apache.org.
ACCUMULO-1487, ACCUMULO-1491 Stop packaging docs for monitor

Moved docs out of monitor and into docs directory. Added docs to assemblies.
Remove unnecessary goals from release profile. Remove links from docs to
apidocs. Restricted rpms/debs from being placed in lib/ and docs/ in tarball.


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/a20e19fc
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/a20e19fc
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/a20e19fc

Branch: refs/heads/master
Commit: a20e19fc4f7c7989ba1b50459d9f762063e3e631
Parents: 0428122
Author: Christopher Tubbs <ct...@apache.org>
Authored: Thu Mar 27 20:35:32 2014 -0400
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Thu Mar 27 20:41:08 2014 -0400

----------------------------------------------------------------------
 assemble/src/main/assemblies/component.xml      |  17 +-
 .../core/conf/DefaultConfiguration.java         |  11 +-
 docs/pom.xml                                    |  21 ++
 .../chapters/administration.tex                 |   2 +-
 .../chapters/table_configuration.tex            |   4 +-
 docs/src/main/resources/administration.html     | 171 +++++++++++++++
 docs/src/main/resources/bulkIngest.html         | 114 ++++++++++
 docs/src/main/resources/combiners.html          |  87 ++++++++
 docs/src/main/resources/constraints.html        |  50 +++++
 docs/src/main/resources/distributedTracing.html |  99 +++++++++
 docs/src/main/resources/documentation.css       | 112 ++++++++++
 docs/src/main/resources/examples/README         |  95 ++++++++
 docs/src/main/resources/examples/README.batch   |  55 +++++
 docs/src/main/resources/examples/README.bloom   | 219 +++++++++++++++++++
 .../main/resources/examples/README.bulkIngest   |  33 +++
 .../main/resources/examples/README.classpath    |  68 ++++++
 docs/src/main/resources/examples/README.client  |  79 +++++++
 .../src/main/resources/examples/README.combiner |  70 ++++++
 .../main/resources/examples/README.constraints  |  54 +++++
 docs/src/main/resources/examples/README.dirlist | 114 ++++++++++
 docs/src/main/resources/examples/README.export  |  91 ++++++++
 .../src/main/resources/examples/README.filedata |  47 ++++
 docs/src/main/resources/examples/README.filter  | 110 ++++++++++
 .../main/resources/examples/README.helloworld   |  47 ++++
 .../main/resources/examples/README.isolation    |  50 +++++
 docs/src/main/resources/examples/README.mapred  | 154 +++++++++++++
 .../main/resources/examples/README.maxmutation  |  47 ++++
 docs/src/main/resources/examples/README.regex   |  58 +++++
 .../main/resources/examples/README.reservations |  66 ++++++
 docs/src/main/resources/examples/README.rowhash |  59 +++++
 docs/src/main/resources/examples/README.shard   |  67 ++++++
 .../main/resources/examples/README.tabletofile  |  59 +++++
 .../src/main/resources/examples/README.terasort |  50 +++++
 .../main/resources/examples/README.visibility   | 131 +++++++++++
 docs/src/main/resources/index.html              |  40 ++++
 docs/src/main/resources/isolation.html          |  51 +++++
 docs/src/main/resources/lgroups.html            |  45 ++++
 docs/src/main/resources/metrics.html            | 182 +++++++++++++++
 docs/src/main/resources/timestamps.html         | 160 ++++++++++++++
 pom.xml                                         |   4 +-
 .../accumulo/monitor/servlets/BasicServlet.java |   4 +-
 .../monitor/servlets/DefaultServlet.java        |  26 +--
 .../src/main/resources/docs/administration.html | 171 ---------------
 .../src/main/resources/docs/bulkIngest.html     | 114 ----------
 .../src/main/resources/docs/combiners.html      |  85 -------
 .../src/main/resources/docs/constraints.html    |  49 -----
 .../main/resources/docs/distributedTracing.html |  99 ---------
 .../src/main/resources/docs/documentation.css   | 112 ----------
 .../src/main/resources/docs/examples/README     |  95 --------
 .../main/resources/docs/examples/README.batch   |  55 -----
 .../main/resources/docs/examples/README.bloom   | 219 -------------------
 .../resources/docs/examples/README.bulkIngest   |  33 ---
 .../resources/docs/examples/README.classpath    |  68 ------
 .../main/resources/docs/examples/README.client  |  79 -------
 .../resources/docs/examples/README.combiner     |  70 ------
 .../resources/docs/examples/README.constraints  |  54 -----
 .../main/resources/docs/examples/README.dirlist | 114 ----------
 .../main/resources/docs/examples/README.export  |  91 --------
 .../resources/docs/examples/README.filedata     |  47 ----
 .../main/resources/docs/examples/README.filter  | 110 ----------
 .../resources/docs/examples/README.helloworld   |  47 ----
 .../resources/docs/examples/README.isolation    |  50 -----
 .../main/resources/docs/examples/README.mapred  | 154 -------------
 .../resources/docs/examples/README.maxmutation  |  47 ----
 .../main/resources/docs/examples/README.regex   |  58 -----
 .../resources/docs/examples/README.reservations |  66 ------
 .../main/resources/docs/examples/README.rowhash |  59 -----
 .../main/resources/docs/examples/README.shard   |  67 ------
 .../resources/docs/examples/README.tabletofile  |  59 -----
 .../resources/docs/examples/README.terasort     |  50 -----
 .../resources/docs/examples/README.visibility   | 131 -----------
 .../monitor/src/main/resources/docs/index.html  |  41 ----
 .../src/main/resources/docs/isolation.html      |  39 ----
 .../src/main/resources/docs/lgroups.html        |  42 ----
 .../src/main/resources/docs/metrics.html        | 182 ---------------
 .../src/main/resources/docs/timestamps.html     | 160 --------------
 76 files changed, 2982 insertions(+), 2958 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/assemble/src/main/assemblies/component.xml
----------------------------------------------------------------------
diff --git a/assemble/src/main/assemblies/component.xml b/assemble/src/main/assemblies/component.xml
index ccd2653..24515ea 100644
--- a/assemble/src/main/assemblies/component.xml
+++ b/assemble/src/main/assemblies/component.xml
@@ -36,6 +36,8 @@
       </includes>
       <excludes>
         <exclude>${groupId}:${artifactId}-docs</exclude>
+        <exclude>${groupId}:${artifactId}-*:rpm</exclude>
+        <exclude>${groupId}:${artifactId}-*:deb</exclude>
       </excludes>
     </dependencySet>
     <dependencySet>
@@ -46,7 +48,7 @@
       <outputFileNameMapping>${artifactId}_user_manual.${artifact.extension}</outputFileNameMapping>
       <useTransitiveDependencies>false</useTransitiveDependencies>
       <includes>
-        <include>${groupId}:${artifactId}-docs</include>
+        <include>${groupId}:${artifactId}-docs:pdf:user-manual</include>
       </includes>
     </dependencySet>
   </dependencySets>
@@ -100,15 +102,26 @@
       </excludes>
     </fileSet>
     <fileSet>
-      <directory>../docs</directory>
+      <directory>../docs/src/main/resources</directory>
+      <outputDirectory>/docs</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <includes>
         <include>*.html</include>
+        <include>*.css</include>
         <include>examples/*</include>
       </includes>
     </fileSet>
     <fileSet>
+      <directory>../docs/target</directory>
+      <outputDirectory>/docs</outputDirectory>
+      <directoryMode>0755</directoryMode>
+      <fileMode>0644</fileMode>
+      <includes>
+        <include>config.html</include>
+      </includes>
+    </fileSet>
+    <fileSet>
       <directory>../conf</directory>
       <directoryMode>0755</directoryMode>
       <fileMode>0755</fileMode>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
index 030e88a..847fd02 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
@@ -61,22 +61,15 @@ public class DefaultConfiguration extends AccumuloConfiguration {
   }
 
   /*
-   * Used by the monitor to show configuration properties
-   */
-  protected static void generateDocumentation(PrintStream doc) {
-    new ConfigurationDocGen(doc).generateHtml();
-  }
-
-  /*
    * Generate documentation for conf/accumulo-site.xml file usage
    */
   public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {
-    if (args.length == 2 && args[0].equals("--generate-doc")) {
+    if (args.length == 2 && args[0].equals("--generate-html")) {
       new ConfigurationDocGen(new PrintStream(args[1], Constants.UTF8.name())).generateHtml();
     } else if (args.length == 2 && args[0].equals("--generate-latex")) {
       new ConfigurationDocGen(new PrintStream(args[1], Constants.UTF8.name())).generateLaTeX();
     } else {
-      throw new IllegalArgumentException("Usage: " + DefaultConfiguration.class.getName() + " --generate-doc <filename> | --generate-latex <filename>");
+      throw new IllegalArgumentException("Usage: " + DefaultConfiguration.class.getName() + " --generate-html <filename> | --generate-latex <filename>");
     }
   }
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/pom.xml
----------------------------------------------------------------------
diff --git a/docs/pom.xml b/docs/pom.xml
index 32ba317..f7ad760 100644
--- a/docs/pom.xml
+++ b/docs/pom.xml
@@ -56,6 +56,21 @@
                 </configuration>
               </execution>
               <execution>
+                <id>config-html</id>
+                <goals>
+                  <goal>java</goal>
+                </goals>
+                <phase>compile</phase>
+                <configuration>
+                  <mainClass>org.apache.accumulo.core.conf.DefaultConfiguration</mainClass>
+                  <classpathScope>compile</classpathScope>
+                  <arguments>
+                    <argument>--generate-html</argument>
+                    <argument>${project.build.directory}/config.html</argument>
+                  </arguments>
+                </configuration>
+              </execution>
+              <execution>
                 <id>config-appendix</id>
                 <goals>
                   <goal>java</goal>
@@ -136,6 +151,12 @@
                         <source>
                           <location>${project.build.directory}/accumulo_user_manual.pdf</location>
                         </source>
+                        <source>
+                          <location>src/main/resources/</location>
+                        </source>
+                        <source>
+                          <location>${project.build.directory}/config.html</location>
+                        </source>
                       </sources>
                     </mapping>
                   </mappings>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/latex/accumulo_user_manual/chapters/administration.tex
----------------------------------------------------------------------
diff --git a/docs/src/main/latex/accumulo_user_manual/chapters/administration.tex b/docs/src/main/latex/accumulo_user_manual/chapters/administration.tex
index 57c8760..08c5108 100644
--- a/docs/src/main/latex/accumulo_user_manual/chapters/administration.tex
+++ b/docs/src/main/latex/accumulo_user_manual/chapters/administration.tex
@@ -161,7 +161,7 @@ secret and make sure that the \texttt{accumulo-site.xml} file is not readable to
 
 Some settings can be modified via the Accumulo shell and take effect immediately, but
 some settings require a process restart to take effect. See the configuration documentation
-(available on the monitor web pages and in Appendix~\ref{app:config}) for details.
+(available in the docs directory of the tarball and in Appendix~\ref{app:config}) for details.
 
 \subsection{Deploy Configuration}
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/latex/accumulo_user_manual/chapters/table_configuration.tex
----------------------------------------------------------------------
diff --git a/docs/src/main/latex/accumulo_user_manual/chapters/table_configuration.tex b/docs/src/main/latex/accumulo_user_manual/chapters/table_configuration.tex
index 0e0dad4..a19cb52 100644
--- a/docs/src/main/latex/accumulo_user_manual/chapters/table_configuration.tex
+++ b/docs/src/main/latex/accumulo_user_manual/chapters/table_configuration.tex
@@ -110,7 +110,7 @@ change to an existing constraint class requires Accumulo to be restarted.
 
 An example of constraints can be found in\\
 \texttt{accumulo/docs/examples/README.constraints} with corresponding code under\\
-\texttt{accumulo/examples/simple/main/java/accumulo/examples/simple/constraints} .
+\texttt{accumulo/examples/simple/src/main/java/accumulo/examples/simple/constraints} .
 
 \section{Bloom Filters}
 As mutations are applied to an Accumulo table, several files are created per tablet. If
@@ -355,7 +355,7 @@ class to Accumulo's lib/ext directory.
 An example of a Combiner can be found under
 
 \begingroup\fontsize{8pt}{8pt}\selectfont\begin{verbatim}
-accumulo/examples/simple/main/java/org/apache/accumulo/examples/simple/combiner/StatsCombiner.java
+accumulo/examples/simple/src/main/java/org/apache/accumulo/examples/simple/combiner/StatsCombiner.java
 \end{verbatim}\endgroup
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/administration.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/administration.html b/docs/src/main/resources/administration.html
new file mode 100644
index 0000000..5898037
--- /dev/null
+++ b/docs/src/main/resources/administration.html
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Administration</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Administration</h1>
+
+<h3>Starting accumulo for the first time</h3>
+
+<p>For the most part, accumulo is ready to go out of the box. To start it, first you must distribute and install
+the accumulo software to each machine in the cloud that you wish to run on. The software should be installed
+in the same directory on each machine and configured identically (or at least similarly... see the configuration
+sections for more details). Select one machine to be your bootstrap machine, the one that you will start accumulo
+with. Note that you must have passphrase-less ssh access to each machine from your bootstrap machine. On this machine,
+create a conf/masters and conf/slaves file. In the masters file, type the hostname of the machine you wish to run the master on (probably localhost).
+In the slaves file, type the hostnames, one per line, of each machine you wish to participate in accumulo as a tablet server. If you neglect
+to create these files, the startup scripts will assume you are trying to run on localhost only, and will set up a single-node instance.
+It is probably a good idea to back up these files, or distribute them to the other nodes as well, so that you can easily boot up accumulo
+from another machine, if necessary. You can also create a <code>conf/accumulo-env.sh</code> file if you want to configure any custom environment variables.
+
+<p>Once properly configured, you can initialize or prepare an instance of accumulo by running: <code>bin/accumulo&nbsp;init</code><br />
+Follow the prompts and you are ready to go. This step only prepares accumulo to run; it does not start accumulo.
+
+<h3>Starting accumulo</h3>
+
+<p>Once you have configured accumulo to your liking, and distributed the appropriate configuration to each machine, you can start accumulo with
+bin/start-all.sh. If at any time you wish to bring accumulo servers online after one or more have been shut down, you can run bin/start-all.sh again.
+This step will only start services that are not already running. Be aware that if you run this command on more than one machine, you may unintentionally
+start an extra copy of the garbage collector service and the monitoring service, since each of these will run on the server on which you run this script.
+
+<h3>Stopping accumulo</h3>
+
+<p>Similar to the start-all.sh script, we provide a bin/stop-all.sh script to shut down accumulo. This will prompt for the root password so that it can
+ask the master to shut down the tablet servers gracefully. If the tablet servers do not respond, or the master takes too long, you can force a shutdown by hitting Ctrl-C
+at the password prompt and waiting 15 seconds for the forced shutdown to proceed. Normally, once the shutdown happens gracefully, unresponsive tablet servers are
+forcibly shut down after 5 seconds.
+
+<h3>Adding a Node</h3>
+
+<p>Update your <code>$ACCUMULO_HOME/conf/slaves</code> (or <code>$ACCUMULO_CONF_DIR/slaves</code>) file to account for the addition; at a minimum this needs to be on the host(s) being added, but in practice it's good to ensure consistent configuration across all nodes.</p>
+
+<pre>
+$ACCUMULO_HOME/bin/accumulo admin start &lt;host&gt; {&lt;host&gt; ...}
+</pre>
+
+<p>Alternatively, you can ssh to each of the hosts you want to add and run <code>$ACCUMULO_HOME/bin/start-here.sh</code>.</p>
+
+<p>Make sure the host in question has the new configuration, or else the tablet server won't start.</p>
+
+<h3>Decommissioning a Node</h3>
+
+<p>If you need to take a node out of operation, you can trigger a graceful shutdown of a tablet server. Accumulo will automatically rebalance the tablets across the available tablet servers.</p>
+
+<pre>
+$ACCUMULO_HOME/bin/accumulo admin stop &lt;host&gt; {&lt;host&gt; ...}
+</pre>
+
+<p>Alternatively, you can ssh to each of the hosts you want to remove and run <code>$ACCUMULO_HOME/bin/stop-here.sh</code>.</p>
+
+<p>Be sure to update your <code>$ACCUMULO_HOME/conf/slaves</code> (or <code>$ACCUMULO_CONF_DIR/slaves</code>) file to account for the removal of these hosts. Bear in mind that the monitor will not re-read the slaves file automatically, so it will report the decommissioned servers as down; it's recommended that you restart the monitor so that the node list is up to date.</p>
+
+<h3>Configuration</h3>
+<p>Accumulo configuration information is stored in an XML file and in ZooKeeper. System-wide
+configuration information is stored in accumulo-site.xml. In order for accumulo to
+find this file, its directory must be on the classpath. Accumulo will log a warning if it cannot find
+it, and will use built-in default values. The accumulo scripts try to put the config directory on the classpath.
+
+<p>Starting with version 1.0, per-table configuration was
+introduced. This information is stored in ZooKeeper. This information
+can be manipulated using the config command in the accumulo
+shell. ZooKeeper will notify all tablet servers when config properties
+are modified. This makes it possible to change major compaction
+settings, for example, for a table while accumulo is running.
+
+<p>Per-table configuration settings override system settings.
+
+<p>See the possible configuration options and their default values <a href='config.html'>here</a>
+
+<h3>Managing system resources</h3>
+
+<p>It is very important how disk and memory usage are allocated across the cluster and how server processes are distributed across the machines.
+
+<ul>
+ <li> On larger clusters, run the namenode, secondary namenode, jobtracker, accumulo master, and zookeepers on dedicated nodes. On a smaller cluster you may want to run all master processes on one node. When doing this ensure that the max total memory that could be used by all master processes does not exceed system memory. Swapping on your single master node would not be good.
+ <li> Accumulo 1.2 and earlier rely on zookeeper but do not use it heavily. On a large cluster, setting up 3 or 5 zookeepers should be plenty. Since there is no performance gain when running more zookeepers, fault tolerance is the only benefit.
+ <li> On slave nodes ensure the memory used by all slave processes is less than system memory. For example the following slave node config could use up to 38G of RAM: tablet server 3G, logger 1G, data node 2G, up to 10 mappers each using 2G, and up to 6 reducers each using 2G. If the slave nodes only have 32G, then using 38G will result in swapping, which could cause tablet servers to lose their locks in zookeeper and die. Even if swapping does not cause tablet servers to die, it will kill performance.
+ <li>Accumulo and map reduce will work with less memory, but it has an impact. Accumulo will minor compact more frequently when it has less map memory, resulting in more major compactions. The minor and major compactions both use CPU and HDFS I/O. The same goes for map reduce: the less memory you give it, the more it has to sort and spill. Try to minimize spilling and compactions as much as possible without causing swapping.
+ <li>Accumulo writes data to disk before it sorts it in memory. This allows data that was in memory when a tablet server crashes to be recovered. Each slave node needs a local directory to write this data to. Ensure the file system holding this directory has at least 100G free on all nodes. Also, if this directory is in a filesystem used by map reduce or hdfs, they may affect each other's performance.
+</ul>
+
+<p>There are a few settings that determine how much memory accumulo tablet
+servers use. In accumulo-env.sh there is a setting called
+ACCUMULO_TSERVER_OPTS. By default this is set to something like "-Xmx512m
+-Xms512m". These are Java jvm options asking Java to use 512 megabytes of
+memory. By default accumulo stores data written to it outside of the Java
+memory space in order to avoid pauses caused by the Java garbage collector. The
+amount of memory it uses for this data is determined by the accumulo setting
+"tserver.memory.maps.max". Since this memory is outside of the Java managed
+memory, the process can grow larger than the -Xmx setting. So if -Xmx is set
+to 512M and tserver.memory.maps.max is set to 1G, a tablet server process can
+be expected to use 1.5G. If tserver.memory.maps.native.enabled is set to
+false, then accumulo will only use memory managed by Java and the process will
+not use more than what -Xmx is set to. In this case the
+tserver.memory.maps.max setting should be 75% of the -Xmx setting.
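+
+<p>For example, the off-heap map size can be set in accumulo-site.xml; the
+value below is illustrative only:
+
+<pre>
+  &lt;property&gt;
+    &lt;name&gt;tserver.memory.maps.max&lt;/name&gt;
+    &lt;value&gt;1G&lt;/value&gt;
+  &lt;/property&gt;
+</pre>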
+
+<h3>Swappiness</h3>
+
+<p>The linux kernel will swap out memory of running programs to increase
+the size of the disk buffers. This tendency to swap out is controlled by
+a kernel setting called "swappiness."  This behavior does not work well for
+large java servers. When a java process runs a garbage collection, it touches
+lots of pages forcing all swapped out pages back into memory. It is suggested
+that swappiness be set to zero.
+
+<pre>
+ # sysctl -w vm.swappiness=0
+ # echo "vm.swappiness = 0" &gt;&gt; /etc/sysctl.conf
+</pre>
+
+<h3>Hadoop timeouts</h3>
+
+<p>In order to detect failed datanodes, use shorter timeouts. Add the following to your
+hdfs-site.xml file:
+
+<pre>
+
+  &lt;property&gt;
+    &lt;name&gt;dfs.socket.timeout&lt;/name&gt;
+    &lt;value&gt;3000&lt;/value&gt;
+  &lt;/property&gt;
+
+  &lt;property&gt;
+    &lt;name&gt;dfs.socket.write.timeout&lt;/name&gt;
+    &lt;value&gt;5000&lt;/value&gt;
+  &lt;/property&gt;
+
+  &lt;property&gt;
+    &lt;name&gt;ipc.client.connect.timeout&lt;/name&gt;
+    &lt;value&gt;1000&lt;/value&gt;
+  &lt;/property&gt;
+
+  &lt;property&gt;
+    &lt;name&gt;ipc.client.connect.max.retries.on.timeouts&lt;/name&gt;
+    &lt;value&gt;2&lt;/value&gt;
+  &lt;/property&gt;
+
+</pre>
+
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/bulkIngest.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/bulkIngest.html b/docs/src/main/resources/bulkIngest.html
new file mode 100644
index 0000000..9e9896e
--- /dev/null
+++ b/docs/src/main/resources/bulkIngest.html
@@ -0,0 +1,114 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Bulk Ingest</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Bulk Ingest</h1>
+
+<p>Accumulo supports the ability to import sorted files produced by an
+external process into an online table. Often, it is much faster to churn
+through large amounts of data using map/reduce to produce these files.
+The new files can be incorporated into Accumulo using bulk ingest.
+
+<ul>
+<li>Construct an <code>org.apache.accumulo.core.client.Connector</code> instance</li>
+<li>Call <code>connector.tableOperations().getSplits()</code></li>
+<li>Run a map/reduce job using <code>RangePartitioner</code>
+with splits from the previous step</li>
+<li>Call <code>connector.tableOperations().importDirectory()</code> passing the output directory of the MapReduce job</li>
+</ul>
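+
+<p>The final import call, for example, might look like the following sketch
+(assuming a <code>Connector</code> named <code>conn</code>; the table name and
+paths are placeholders):
+
+<pre>
+// the failures directory must exist and be empty before the call
+conn.tableOperations().importDirectory("mytable",
+    "/tmp/bulk/files", "/tmp/bulk/failures", false);
+</pre>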
+
+<p>Files can also be imported using the "importdirectory" shell command.
+
+<p>A complete example is available in <a href='examples/README.bulkIngest'>README.bulkIngest</a>
+
+<p>Importing data using whole files of sorted data can be very efficient, but it differs
+from live ingest in the following ways:
+<ul>
+ <li>Table constraints are not applied against the data in the file.
+ <li>Adding new files to tables is likely to trigger major compactions.
+ <li>Timestamps in the file could contain strange values. Accumulo can be asked to use the ingest timestamp for all values if this is a concern.
+ <li>It is possible to create invalid visibility values (for example "&|"). This will cause errors when the data is accessed.
+ <li>Bulk imports do not affect the entry counts in the monitor page until the files are compacted.
+</ul>
+
+<h2>Best Practices</h2>
+
+<p>Consider two approaches to creating ingest files using map/reduce.
+
+<ol>
+ <li>A large file containing the Key/Value pairs for only a single tablet.
+ <li>A set of small files containing Key/Value pairs for every tablet.
+</ol>
+
+<p>In the first case, adding the file requires telling a single tablet server about a single file. Even if the file
+is 20G in size, it is one call to the tablet server. The tablet server makes one extra file entry in the
+tablet's metadata, and the data is now part of the tablet.
+
+<p>In the second case, a request must be made for each tablet for each file to be added. If there
+are 100 files and 100 tablets, this will be 10,000 requests, and the number of files that need to be opened
+for scans on these tablets will be very large. Major compactions will most likely start, which will eventually
+fix the problem, but a lot more work needs to be done by accumulo to read these files.
+
+<p>Getting good, fast, bulk import performance depends on creating files like the first, and avoiding files like
+the second.
+
+<p>For this reason, a RangePartitioner should be used to create files when
+writing with the AccumuloFileOutputFormat.
+
+<p>Hash partitioning is not recommended because it will put keys in random
+groups, exactly like the second approach above.
+
+<p>Any set of cut points for range partitioning can be used in a map
+reduce job, but using Accumulo's current splits is probably the best
+choice. However, in some cases there may be too many
+splits. For example, if there are 2000 splits, you would need to run
+2001 reducers. To overcome this problem use the
+<code>connector.tableOperations().getSplits(&lt;table name&gt;,&lt;max
+splits&gt;)</code> method. This method will not return more than
+<code> &lt;max splits&gt; </code> splits, but the splits it returns
+will optimally partition the data for Accumulo.
+
+<p>Remember that Accumulo never splits rows across tablets.
+Therefore the range partitioner only considers rows when partitioning.
+
+<p>When bulk importing many files into a new table, it might be good to pre-split the table to bring
+additional resources to bear on accepting the data. For example, if you know your data is indexed based on the
+date, pre-creating splits for each day will allow files to fall into natural splits. Having more tablets
+accept the new data means that more resources can be used to import the data right away.
+
+<p>An alternative to bulk ingest is to have a map/reduce job use
+<code>AccumuloOutputFormat</code>, which can support billions of inserts per
+hour, depending on the size of your cluster. This is sufficient for
+most users, but bulk ingest remains the fastest way to incorporate
+data into Accumulo. In addition, bulk ingest has one advantage over
+AccumuloOutputFormat: there is no duplicate data insertion. When one uses
+map/reduce to output data to accumulo, restarted jobs may re-enter
+data from previous failed attempts. Generally, this only matters when
+there are aggregators. With bulk ingest, reducers are writing to new
+map files, so it does not matter. If a reduce fails, you create a new
+map file. When all reducers finish, you bulk ingest the map files
+into Accumulo. The disadvantage to bulk ingest over <code>AccumuloOutputFormat</code> is
+greater latency: the entire map/reduce job must complete
+before any data is available.
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/combiners.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/combiners.html b/docs/src/main/resources/combiners.html
new file mode 100644
index 0000000..a5e3dc0
--- /dev/null
+++ b/docs/src/main/resources/combiners.html
@@ -0,0 +1,87 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Combiners</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Combiners</h1>
+
+<p>Accumulo supports on the fly lazy aggregation of data using Combiners. Aggregation is done at compaction and scan time. No lookup is done at insert time, which greatly speeds up ingest.
+
+<p>Combiners are easy to use. You use the setiter command to configure a combiner for a table. Allowing a Combiner to apply to a whole column family is an interesting twist that gives the user great flexibility. The example below demonstrates this flexibility.
+
+<p><pre>
+
+Shell - Apache Accumulo Interactive Shell
+- version: 1.5.0
+- instance id: 863fc0d1-3623-4b6c-8c23-7d4fdb1c8a49
+-
+- type 'help' for a list of available commands
+-
+user@instance&gt; createtable perDayCounts
+user@instance perDayCounts&gt; setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount -class org.apache.accumulo.core.iterators.user.SummingCombiner
+TypedValueCombiner can interpret Values as a variety of number encodings (VLong, Long, or String) before combining
+----------&gt; set SummingCombiner parameter columns, &lt;col fam&gt;[:&lt;col qual&gt;]{,&lt;col fam&gt;[:&lt;col qual&gt;]} escape non aplhanum chars using %&lt;hex&gt;.: day
+----------&gt; set SummingCombiner parameter type, &lt;VARNUM|LONG|STRING&gt;: STRING
+user@instance perDayCounts&gt; insert foo day 20080101 1
+user@instance perDayCounts&gt; insert foo day 20080101 1
+user@instance perDayCounts&gt; insert foo day 20080103 1
+user@instance perDayCounts&gt; insert bar day 20080101 1
+user@instance perDayCounts&gt; insert bar day 20080101 1
+user@instance perDayCounts&gt; scan
+bar day:20080101 []    2
+foo day:20080101 []    2
+foo day:20080103 []    1
+</pre>
+
+
+<p>Implementing a new Combiner is a snap. Simply write some Java code that
+extends <code>org.apache.accumulo.core.iterators.Combiner</code>. A good place
+to look for examples is the <code>org.apache.accumulo.core.iterators.user</code> package. Also look at the example StatsCombiner.
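+
+<p>For instance, a bare-bones summing Combiner might look like the sketch
+below; it assumes values are longs encoded as strings, and omits error
+handling:
+
+<pre>
+import java.util.Iterator;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Combiner;
+
+public class StringSumCombiner extends Combiner {
+  @Override
+  public Value reduce(Key key, Iterator<Value> iter) {
+    long sum = 0;
+    // Sum all versions of the value for this key's row and column
+    while (iter.hasNext())
+      sum += Long.parseLong(new String(iter.next().get()));
+    return new Value(Long.toString(sum).getBytes());
+  }
+}
+</pre>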
+
+<p>To deploy a new Combiner, jar it up and put the jar in accumulo/lib/ext. To see an example look at <a href='examples/README.combiner'>README.combiner</a>
+
+<p>If you would like to see what iterators a table has, you can use the config command as in the following example.
+
+<p><pre>
+user@instance perDayCounts&gt; config -t perDayCounts -f iterator
+---------+---------------------------------------------+-----------------------------------------------------------
+SCOPE    | NAME                                        | VALUE
+---------+---------------------------------------------+-----------------------------------------------------------
+table    | table.iterator.majc.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
+table    | table.iterator.majc.daycount.opt.columns .. | day
+table    | table.iterator.majc.daycount.opt.type ..... | STRING
+table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+table    | table.iterator.minc.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
+table    | table.iterator.minc.daycount.opt.columns .. | day
+table    | table.iterator.minc.daycount.opt.type ..... | STRING
+table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+table    | table.iterator.scan.daycount .............. | 10,org.apache.accumulo.core.iterators.user.SummingCombiner
+table    | table.iterator.scan.daycount.opt.columns .. | day
+table    | table.iterator.scan.daycount.opt.type ..... | STRING
+table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+---------+---------------------------------------------+-----------------------------------------------------------
+</pre>
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/constraints.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/constraints.html b/docs/src/main/resources/constraints.html
new file mode 100644
index 0000000..d6e5037
--- /dev/null
+++ b/docs/src/main/resources/constraints.html
@@ -0,0 +1,50 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Constraints</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Constraints</h1>
+
+Accumulo supports constraints. Constraints are applied to mutations at ingest time.
+
+<p>Implementing a new constraint is a snap. Simply write some Java code that
+implements <code>org.apache.accumulo.core.constraints.Constraint</code>.
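+
+<p>As a rough sketch, a constraint that rejects mutations containing empty
+values might look like this; the violation code and message are made up for
+illustration:
+
+<pre>
+import java.util.Collections;
+import java.util.List;
+import org.apache.accumulo.core.constraints.Constraint;
+import org.apache.accumulo.core.data.ColumnUpdate;
+import org.apache.accumulo.core.data.Mutation;
+
+public class NonEmptyValueConstraint implements Constraint {
+  @Override
+  public String getViolationDescription(short violationCode) {
+    return "Value is empty";
+  }
+
+  @Override
+  public List<Short> check(Environment env, Mutation mutation) {
+    for (ColumnUpdate update : mutation.getUpdates())
+      if (update.getValue().length == 0)
+        return Collections.singletonList((short) 1);
+    return null; // null or an empty list means no violations
+  }
+}
+</pre>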
+
+<p>To deploy a new constraint, jar it up and put the jar in accumulo/lib/ext.
+
+<p>After creating a constraint, set a table-specific property to use it. The following example adds two constraints to table foo. In the example, com.test.ExampleConstraint and com.test.AnotherConstraint are class names.
+
+<p><pre>
+user@instance:9999 perDayCounts&gt; createtable foo
+user@instance:9999 foo&gt; config -t foo -s table.constraint.1=com.test.ExampleConstraint
+user@instance:9999 foo&gt; config -t foo -s table.constraint.2=com.test.AnotherConstraint
+user@instance:9999 foo&gt; config -t foo -f constraint
+---------+------------------------------------------+-----------------------------------------
+SCOPE    | NAME                                     | VALUE
+---------+------------------------------------------+-----------------------------------------
+table    | table.constraint.1...................... | com.test.ExampleConstraint
+table    | table.constraint.2...................... | com.test.AnotherConstraint
+---------+------------------------------------------+-----------------------------------------
+user@instance:9999 foo&gt;
+</pre>
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/distributedTracing.html
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/distributedTracing.html b/docs/src/main/resources/distributedTracing.html
new file mode 100644
index 0000000..54c9095
--- /dev/null
+++ b/docs/src/main/resources/distributedTracing.html
@@ -0,0 +1,99 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<html>
+<head>
+<title>Accumulo Distributed Tracing</title>
+<link rel='stylesheet' type='text/css' href='documentation.css' media='screen'/>
+</head>
+<body>
+
+<h1>Apache Accumulo Documentation : Distributed Tracing</h1>
+
+<p>It can be difficult to determine why some operations are taking longer than expected. For example, you may be looking up items with
+very low latency, but sometimes the lookups take much longer. Determining the cause of the delay is difficult because the system is
+distributed, and the typical lookup is fast.</p>
+
+<p>To provide insight into what accumulo is doing during your scan, you can turn on tracing before you do your operation:</p>
+
+<pre>
+   DistributedTrace.enable(instance, zooReader, hostname, "myApplication");
+   Trace scanTrace = Trace.on("client:scan");
+   BatchScanner scanner = conn.createBatchScanner(...);
+   // Configure your scanner
+   for (Entry<Key, Value> entry : scanner) {
+   }
+   Trace.off();
+</pre>
+
+
+<p>Accumulo has been instrumented to record the time that various operations take when tracing is turned on. The fact that tracing is
+enabled follows all the requests made on behalf of the user throughout the distributed infrastructure of accumulo, and across all
+threads of execution.</p>
+
+<p>These time spans will be inserted into the trace accumulo table. You can browse recent traces from the accumulo monitor page.
+You can also read the trace table directly.</p>
+
+<p>Tracing is supported in the shell. For example:
+
+<pre>
+root@test&gt; createtable test
+root@test test&gt; insert a b c d
+root@test test&gt; trace on
+root@test test&gt; scan
+a b:c []    d
+root@test test&gt; trace off
+Waiting for trace information
+Waiting for trace information
+Waiting for trace information
+Trace started at 2011/03/16 09:20:31.387
+Time  Start  Service@Location       Name
+ 3355+0      shell@host2 shell:root
+    1+1        shell@host2 client:listUsers
+    1+1434     tserver@host2 getUserAuthorizations
+    1+1434     shell@host2 client:getUserAuthorizations
+   10+1550     shell@host2 scan
+    9+1551       shell@host2 scan:location
+    7+1552         shell@host2 client:startScan
+    6+1553         tserver@host2 startScan
+    5+1553           tserver@host2 tablet read ahead 11
+    1+1559         shell@host2 client:closeScan
+    1+1561     shell@host2 client:listUsers
+</pre>
+
+<p>Here we can see that the shell is getting the list of users (which is used for tab-completion) after every command. While
+unexpected, it is a fast operation. In fact, all the requests are very fast, and most of the time is spent waiting for the user
+to make a request while tracing is turned on.</p>
+
+<p>Spans are added to the trace table asynchronously. The user may have to wait several seconds for all requests to complete before the
+trace information is complete.</p>
+
+<p>You can extract the trace data out of the trace table. Each span is a stored as a column in a row named for the trace id.
+The following code will print out a trace:</p>
+
+<pre>
+String table = AccumuloConfiguration.getSystemConfiguration().get(Property.TRACE_TABLE);
+Scanner scanner = shellState.connector.createScanner(table, auths);
+scanner.setRange(new Range(new Text(Long.toHexString(scanTrace.traceId()))));
+TraceDump.printTrace(scanner, new Printer() {
+    void print(String line) {
+        System.out.println(line);
+    }
+});
+</pre>
+
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/documentation.css
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/documentation.css b/docs/src/main/resources/documentation.css
new file mode 100644
index 0000000..3457dac
--- /dev/null
+++ b/docs/src/main/resources/documentation.css
@@ -0,0 +1,112 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements.  See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+html, body {
+    font-size: 10pt;
+    font-family: verdana, arial;
+}
+
+h1 {
+    font-size: 1.7em;
+    font-variant: small-caps;
+    text-align: left;
+}
+
+h2 {
+    font-size: 1.3em;
+    text-align: left;
+}
+
+.highlight {
+    background-color: rgb(206,244,181);
+}
+
+.deprecated {
+    text-decoration: line-through;
+}
+
+table {
+    min-width: 60%;
+    border: 1px #333333 solid;
+    border-spacing: 0;
+}
+
+th {
+    border-top: 0;
+    border-bottom: 3px #333333 solid;
+    border-left: 1px #333333 dotted;
+    border-right: 0;
+    text-align: center;
+    font-variant: small-caps;
+    padding-left: 0.1em;
+    padding-right: 0.1em;
+    padding-top: 0.2em;
+    padding-bottom: 0.2em;
+    vertical-align: bottom;
+}
+
+td {
+    border: 0;
+    font-size: 10pt;
+    text-align: left;
+    padding-left: 7pt;
+    padding-right: 7pt;
+    padding-top: 0.15em;
+    padding-bottom: 0.15em;
+}
+
+thead {
+    color: rgb(66,114,185);
+    text-align: center;
+    font-weight: bold;
+}
+
+pre {
+    font-size: 9pt;
+}
+
+a {
+    text-decoration: none;
+    color: #0000ff;
+    line-height: 1.5em;
+}
+
+a:hover {
+    color: #004400;
+    text-decoration: underline;
+}
+
+.large {
+    font-size: 1.5em;
+    font-variant: small-caps;
+    text-align: left;
+}
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README b/docs/src/main/resources/examples/README
new file mode 100644
index 0000000..4211050
--- /dev/null
+++ b/docs/src/main/resources/examples/README
@@ -0,0 +1,95 @@
+Title: Apache Accumulo Examples
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+Before running any of the examples, the following steps must be performed.
+
+1. Install and run Accumulo via the instructions found in $ACCUMULO_HOME/README.
+   Remember the instance name. It will be referred to as "instance" throughout
+   the examples. A comma-separated list of zookeeper servers will be referred
+   to as "zookeepers".
+
+2. Create an Accumulo user (see the [user manual][1]), or use the root user.
+   The "username" Accumulo user name with password "password" is used
+   throughout the examples. This user needs the ability to create tables.
+
+In all commands, you will need to replace "instance", "zookeepers",
+"username", and "password" with the values you set for your Accumulo instance.
+
+Commands intended to be run in bash are prefixed by '$'. These are always
+assumed to be run from the $ACCUMULO_HOME directory.
+
+Commands intended to be run in the Accumulo shell are prefixed by '>'.
+
+Each README in the examples directory highlights the use of particular
+features of Apache Accumulo.
+
+   README.batch:       Using the batch writer and batch scanner.
+
+   README.bloom:       Creating a bloom filter enabled table to increase query
+                       performance.
+
+   README.bulkIngest:  Ingesting bulk data using map/reduce jobs on Hadoop.
+
+   README.classpath:   Using per-table classpaths.
+
+   README.client:      Using table operations, reading and writing data in Java.
+
+   README.combiner:    Using example StatsCombiner to find min, max, sum, and
+                       count.
+
+   README.constraints: Using constraints with tables.
+
+   README.dirlist:     Storing filesystem information.
+
+   README.export:      Exporting and importing tables.
+
+   README.filedata:    Storing file data.
+
+   README.filter:      Using the AgeOffFilter to remove records more than 30
+                       seconds old.
+
+   README.helloworld:  Inserting records both inside and outside map/reduce
+                       jobs, and reading records between two rows.
+
+   README.isolation:   Using the isolated scanner to ensure partial changes
+                       are not seen.
+
+   README.mapred:      Using MapReduce to read from and write to Accumulo
+                       tables.
+
+   README.maxmutation: Limiting mutation size to avoid running out of memory.
+
+   README.regex:       Using MapReduce and Accumulo to find data using regular
+                       expressions.
+
+   README.rowhash:     Using MapReduce to read a table and write to a new
+                       column in the same table.
+
+   README.shard:       Using the intersecting iterator with a term index
+                       partitioned by document.
+
+   README.tabletofile: Using MapReduce to read a table and write one of its
+                       columns to a file in HDFS.
+
+   README.terasort:    Generating random data and sorting it using Accumulo.
+
+   README.visibility:  Using visibilities (or combinations of authorizations).
+                       Also shows user permissions.
+
+
+[1]: /1.5/user_manual/Accumulo_Shell.html#User_Administration

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.batch
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.batch b/docs/src/main/resources/examples/README.batch
new file mode 100644
index 0000000..05f2304
--- /dev/null
+++ b/docs/src/main/resources/examples/README.batch
@@ -0,0 +1,55 @@
+Title: Apache Accumulo Batch Writing and Scanning Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
+
+ * SequentialBatchWriter.java - writes mutations with sequential rows and random values
+ * RandomBatchWriter.java - used by SequentialBatchWriter to generate random values
+ * RandomBatchScanner.java - reads random rows and verifies their values
+
+This is an example of how to use the batch writer and batch scanner. To compile
+the example, run Maven and copy the produced jar into the Accumulo lib dir.
+This is already done in the tar distribution.
+
+Below are commands that add 10000 entries to Accumulo and then do 100 random
+queries. The write command generates random 50-byte values.
+
+Be sure to use the name of your instance (given as instance here) and the appropriate
+list of zookeeper nodes (given as zookeepers here).
+
+Before you run this, you must ensure that the user you are running as has the
+"exampleVis" authorization. (You can set this in the shell with "setauths -u username -s exampleVis".)
+
+    $ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
+
+You must also create the table, batchtest1, ahead of time. (In the shell, use "createtable batchtest1")
+
+    $ ./bin/accumulo shell -u username -e "createtable batchtest1"
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --vis exampleVis
+    07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
+    07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished
+    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 lookups/sec   0.14 secs
+
+    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : num results : 100
+
+    07 11:33:11,364 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
+    07 11:33:11,370 [client.CountingVerifyingReceiver] INFO : finished
+    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
+
+    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
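+
+For reference, the same write/read cycle can be done directly with the Java
+client API. This is a minimal sketch, not the example's exact code, assuming
+the Accumulo 1.5 client and the placeholder connection values used above:
+
+    import java.util.Collections;
+    import java.util.Map.Entry;
+    import org.apache.accumulo.core.client.*;
+    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+    import org.apache.accumulo.core.data.*;
+    import org.apache.accumulo.core.security.Authorizations;
+    import org.apache.accumulo.core.security.ColumnVisibility;
+    import org.apache.hadoop.io.Text;
+
+    public class BatchSketch {
+      public static void main(String[] args) throws Exception {
+        Connector conn = new ZooKeeperInstance("instance", "zookeepers")
+            .getConnector("username", new PasswordToken("password"));
+
+        // Write one mutation carrying the exampleVis visibility.
+        BatchWriter bw = conn.createBatchWriter("batchtest1", new BatchWriterConfig());
+        Mutation m = new Mutation(new Text("row_00000000"));
+        m.put("foo", "1", new ColumnVisibility("exampleVis"), "value");
+        bw.addMutation(m);
+        bw.close();
+
+        // Read it back with a BatchScanner using 20 query threads.
+        BatchScanner bs = conn.createBatchScanner("batchtest1",
+            new Authorizations("exampleVis"), 20);
+        bs.setRanges(Collections.singleton(new Range(new Text("row_00000000"))));
+        for (Entry<Key,Value> entry : bs)
+          System.out.println(entry.getKey() + " -> " + entry.getValue());
+        bs.close();
+      }
+    }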

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.bloom
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.bloom b/docs/src/main/resources/examples/README.bloom
new file mode 100644
index 0000000..6fe4602
--- /dev/null
+++ b/docs/src/main/resources/examples/README.bloom
@@ -0,0 +1,219 @@
+Title: Apache Accumulo Bloom Filter Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This example shows how to create a table with bloom filters enabled.  It also
+shows how bloom filters increase query performance when looking for values that
+do not exist in a table.
+
+Below, a table named bloom_test is created and bloom filters are enabled.
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> setauths -u username -s exampleVis
+    username@instance> createtable bloom_test
+    username@instance bloom_test> config -t bloom_test -s table.bloom.enabled=true
+    username@instance bloom_test> exit
+
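+The same configuration can be applied programmatically; a minimal sketch,
+assuming an existing Connector named conn:
+
+    // Create the table and enable bloom filters on it.
+    conn.tableOperations().create("bloom_test");
+    conn.tableOperations().setProperty("bloom_test", "table.bloom.enabled", "true");
+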
+Below, 1 million random values are inserted into Accumulo. The randomly
+generated rows range between 0 and 1 billion. The random number generator is
+initialized with the seed 7.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
+
+Below, the table is flushed:
+
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
+    05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
+
+After the flush completes, 500 random queries are done against the table. The
+same seed is used to generate the queries, therefore everything is found in the
+table.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --vis exampleVis
+    Generating 500 random queries...finished
+    96.19 lookups/sec   5.20 secs
+    num results : 500
+    Generating 500 random queries...finished
+    102.35 lookups/sec   4.89 secs
+    num results : 500
+
+Below, another 500 queries are performed using a different seed, which results
+in nothing being found. In this case the lookups are much faster because of
+the bloom filters.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+    Generating 500 random queries...finished
+    2212.39 lookups/sec   0.23 secs
+    num results : 0
+    Did not find 500 rows
+    Generating 500 random queries...finished
+    4464.29 lookups/sec   0.11 secs
+    num results : 0
+    Did not find 500 rows
+
+********************************************************************************
+
+Bloom filters can also speed up lookups for entries that exist. In Accumulo,
+data is divided into tablets and each tablet has multiple map files. Every
+lookup in Accumulo goes to a specific tablet, where a lookup is done on each
+map file in the tablet. So if a tablet has three map files, lookup performance
+can be three times slower than a tablet with one map file. However, if the map
+files contain unique sets of data, then bloom filters can help eliminate map
+files that do not contain the row being looked up. To illustrate this, two
+identical tables were created using the following process. One table had bloom
+filters, the other did not. Also the major compaction ratio was increased to
+prevent the files from being compacted into one file.
+
+ * Insert 1 million entries using RandomBatchWriter with a seed of 7
+ * Flush the table using the shell
+ * Insert 1 million entries using RandomBatchWriter with a seed of 8
+ * Flush the table using the shell
+ * Insert 1 million entries using RandomBatchWriter with a seed of 9
+ * Flush the table using the shell
+
+After following the above steps, each table will have a tablet with three map
+files. Flushing the table after each batch of inserts will create a map file.
+Each map file will contain 1 million entries generated with a different seed.
+This assumes that Accumulo is configured with enough memory to hold 1
+million inserts; if not, more map files will be created.
+
+The commands for creating the first table without bloom filters are below.
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> setauths -u username -s exampleVis
+    username@instance> createtable bloom_test1
+    username@instance bloom_test1> config -t bloom_test1 -s table.compaction.major.ratio=7
+    username@instance bloom_test1> exit
+
+    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test1 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --auths exampleVis"
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
+
+The commands for creating the second table with bloom filters are below.
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> setauths -u username -s exampleVis
+    username@instance> createtable bloom_test2
+    username@instance bloom_test2> config -t bloom_test2 -s table.compaction.major.ratio=7
+    username@instance bloom_test2> config -t bloom_test2 -s table.bloom.enabled=true
+    username@instance bloom_test2> exit
+
+    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test2 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --auths exampleVis"
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
+    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
+
+Below, 500 lookups are done against the table without bloom filters using
+random number generator seed 7. Even though only one map file will likely
+contain entries for this seed, all map files will be interrogated.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+    Generating 500 random queries...finished
+    35.09 lookups/sec  14.25 secs
+    num results : 500
+    Generating 500 random queries...finished
+    35.33 lookups/sec  14.15 secs
+    num results : 500
+
+Below, the same lookups are done against the table with bloom filters. The
+lookups were 2.86 times faster because only one map file was used, even though
+three map files existed.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
+    Generating 500 random queries...finished
+    99.03 lookups/sec   5.05 secs
+    num results : 500
+    Generating 500 random queries...finished
+    101.15 lookups/sec   4.94 secs
+    num results : 500
+
+You can verify the table has three files by looking in HDFS. To look in HDFS
+you will need the table ID, because this is used in HDFS instead of the table
+name. The following command will show table IDs.
+
+    $ ./bin/accumulo shell -u username -p password -e 'tables -l'
+    accumulo.metadata    =>        !0
+    accumulo.root        =>        +r
+    bloom_test1          =>        o7
+    bloom_test2          =>        o8
+    trace                =>         1
+
+So the table ID for bloom_test2 is o8. The command below shows what files this
+table has in HDFS. This assumes Accumulo is at the default location in HDFS.
+
+    $ hadoop fs -lsr /accumulo/tables/o8
+    drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
+    -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
+    -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
+    -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
+
+Running the rfile-info command shows that one of the files has a bloom filter
+and that it is about 1.5MB.
+
+    $ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
+    Locality group         : <DEFAULT>
+	Start block          : 0
+	Num   blocks         : 752
+	Index level 0        : 43,598 bytes  1 blocks
+	First key            : row_0000001169 foo:1 [exampleVis] 1326222052539 false
+	Last key             : row_0999999421 foo:1 [exampleVis] 1326222052058 false
+	Num entries          : 999,536
+	Column families      : [foo]
+
+    Meta block     : BCFile.index
+      Raw size             : 4 bytes
+      Compressed size      : 12 bytes
+      Compression type     : gz
+
+    Meta block     : RFile.index
+      Raw size             : 43,696 bytes
+      Compressed size      : 15,592 bytes
+      Compression type     : gz
+
+    Meta block     : acu_bloom
+      Raw size             : 1,540,292 bytes
+      Compressed size      : 1,433,115 bytes
+      Compression type     : gz
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.bulkIngest
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.bulkIngest b/docs/src/main/resources/examples/README.bulkIngest
new file mode 100644
index 0000000..e07dc9b
--- /dev/null
+++ b/docs/src/main/resources/examples/README.bulkIngest
@@ -0,0 +1,33 @@
+Title: Apache Accumulo Bulk Ingest Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This is an example of how to bulk ingest data into Accumulo using MapReduce.
+
+The following commands show how to run this example. This example creates a
+table called test_bulk with two initial split points, generates 1000 rows of
+test data in HDFS, ingests those rows into Accumulo, and finally verifies that
+the 1000 rows are present.
+
+    $ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
+    $ ARGS="-i instance -z zookeepers -u username -p password"
+    $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
+    $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
+    $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
+    $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
+
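+Internally, the bulk load step hands the completed files to Accumulo with the
+importDirectory call on table operations. A minimal sketch of that client-side
+step, assuming an existing Connector named conn (the directory paths are
+illustrative):
+
+    // Bulk-load the sorted files produced by the MapReduce job. The failures
+    // directory must already exist and be empty.
+    conn.tableOperations().importDirectory("test_bulk",
+        "tmp/bulkWork/files", "tmp/bulkWork/failures", false);
+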
+For a high level discussion of bulk ingest, see the docs dir.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.classpath
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.classpath b/docs/src/main/resources/examples/README.classpath
new file mode 100644
index 0000000..79da239
--- /dev/null
+++ b/docs/src/main/resources/examples/README.classpath
@@ -0,0 +1,68 @@
+Title: Apache Accumulo Classpath Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+
+This example shows how to use per-table classpaths. The example leverages a
+test jar which contains a Filter that suppresses rows containing "foo". The
+example shows copying the FooFilter.jar into HDFS and then making an Accumulo
+table reference that jar.
+
+
+Execute the following command in bash.
+
+    $ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+
+Execute the following in the Accumulo shell to set up the classpath context
+
+    root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib
+
+Create a table
+
+    root@test15> createtable nofoo
+
+The following command makes this table use the configured classpath context
+
+    root@test15 nofoo> config -t nofoo -s table.classpath.context=cx1
+
+The following command configures an iterator that is in FooFilter.jar
+
+    root@test15 nofoo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+    Filter accepts or rejects each Key/Value pair
+    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
+
+The commands below show the filter is working.
+
+    root@test15 nofoo> insert foo1 f1 q1 v1
+    root@test15 nofoo> insert noo1 f1 q1 v2
+    root@test15 nofoo> scan
+    noo1 f1:q1 []    v2
+    root@test15 nofoo>
+
+Below, an attempt is made to add the FooFilter to a table that is not configured
+to use the classpath context cx1. This fails until the table is configured to
+use cx1.
+
+    root@test15 nofoo> createtable nofootwo
+    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+    2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
+    root@test15 nofootwo> config -t nofootwo -s table.classpath.context=cx1
+    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+    Filter accepts or rejects each Key/Value pair
+    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
+
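+The same properties can be set through the client API; a minimal sketch,
+assuming an existing Connector named conn (the namenode address is a
+placeholder):
+
+    // Define the classpath context, then point the table at it.
+    conn.instanceOperations().setProperty(
+        "general.vfs.context.classpath.cx1", "hdfs://namenode:8020/user1/lib");
+    conn.tableOperations().setProperty("nofoo", "table.classpath.context", "cx1");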
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.client
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.client b/docs/src/main/resources/examples/README.client
new file mode 100644
index 0000000..f6b8bcb
--- /dev/null
+++ b/docs/src/main/resources/examples/README.client
@@ -0,0 +1,79 @@
+Title: Apache Accumulo Client Examples
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This documents how to run the simplest Java examples.
+
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
+
+ * Flush.java - flushes a table
+ * RowOperations.java - reads and writes rows
+ * ReadWriteExample.java - creates a table, writes to it, and reads from it
+
+Using the accumulo command, you can run the simple client examples by providing
+their class name and enough arguments to find your Accumulo instance. For
+example, the Flush class will flush a table:
+
+    $ PACKAGE=org.apache.accumulo.examples.simple.client
+    $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
+
+The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
+and Scanner:
+
+    $ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
+    2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
+    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
+    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:1 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:4 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,756 [client.RowOperations] INFO : This is row1 and row3
+    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
+    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,765 [client.RowOperations] INFO : This is just row3
+    2013-01-14 14:45:24,769 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
+    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
+
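+A plain Scanner can read back a single row the same way; a minimal sketch,
+assuming an existing Connector named conn ("mytable" is a placeholder table
+name, and the org.apache.accumulo.core imports are omitted):
+
+    // Fetch only row3, with no scan authorizations.
+    Scanner scan = conn.createScanner("mytable", Authorizations.EMPTY);
+    scan.setRange(new Range("row3"));
+    for (Map.Entry<Key,Value> e : scan)
+      System.out.println(e.getKey() + " -> " + e.getValue());
+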
+To create a table, write to it and read from it:
+
+    $ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
+    hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%03; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%04; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%05; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+    hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.combiner
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.combiner b/docs/src/main/resources/examples/README.combiner
new file mode 100644
index 0000000..f388e5b
--- /dev/null
+++ b/docs/src/main/resources/examples/README.combiner
@@ -0,0 +1,70 @@
+Title: Apache Accumulo Combiner Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This tutorial uses the following Java class, which can be found in org.apache.accumulo.examples.simple.combiner in the examples-simple module:
+
+ * StatsCombiner.java - a combiner that calculates max, min, sum, and count
+
+This is a simple combiner example. To build this example, run Maven and then
+copy the produced jar into the Accumulo lib dir. This is already done in the
+tar distribution.
+
+    $ bin/accumulo shell -u username
+    Enter current password for 'username'@'instance': ***
+
+    Shell - Apache Accumulo Interactive Shell
+    -
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable runners
+    username@instance runners> setiter -t runners -p 10 -scan -minc -majc -n decStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
+    Combiner that keeps track of min, max, sum, and count
+    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.:
+    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: stat
+    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 10
+    username@instance runners> setiter -t runners -p 11 -scan -minc -majc -n hexStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
+    Combiner that keeps track of min, max, sum, and count
+    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.:
+    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: hstat
+    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 16
+    username@instance runners> insert 123456 name first Joe
+    username@instance runners> insert 123456 stat marathon 240
+    username@instance runners> scan
+    123456 name:first []    Joe
+    123456 stat:marathon []    240,240,240,1
+    username@instance runners> insert 123456 stat marathon 230
+    username@instance runners> insert 123456 stat marathon 220
+    username@instance runners> scan
+    123456 name:first []    Joe
+    123456 stat:marathon []    220,240,690,3
+    username@instance runners> insert 123456 hstat virtualMarathon 6a
+    username@instance runners> insert 123456 hstat virtualMarathon 6b
+    username@instance runners> scan
+    123456 hstat:virtualMarathon []    6a,6b,d5,2
+    123456 name:first []    Joe
+    123456 stat:marathon []    220,240,690,3
+
+In this example a table is created and the example stats combiner is applied to
+the column families stat and hstat. The stats combiner computes min, max, sum,
+and count. It can be configured to use a different base or radix. In the
+example above the column family stat is configured for base 10 and the column
+family hstat is configured for base 16.
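+
+The same iterator can be attached through the client API; a minimal sketch,
+assuming an existing Connector named conn:
+
+    // Attach the stats combiner to the stat column family in base 10.
+    IteratorSetting is = new IteratorSetting(10, "decStats",
+        "org.apache.accumulo.examples.simple.combiner.StatsCombiner");
+    is.addOption("columns", "stat");
+    is.addOption("radix", "10");
+    conn.tableOperations().attachIterator("runners", is);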

http://git-wip-us.apache.org/repos/asf/accumulo/blob/a20e19fc/docs/src/main/resources/examples/README.constraints
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.constraints b/docs/src/main/resources/examples/README.constraints
new file mode 100644
index 0000000..b15b409
--- /dev/null
+++ b/docs/src/main/resources/examples/README.constraints
@@ -0,0 +1,54 @@
+Title: Apache Accumulo Constraints Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.constraints in the examples-simple module:
+
+ * AlphaNumKeyConstraint.java - a constraint that requires alphanumeric keys
+ * NumericValueConstraint.java - a constraint that requires numeric string values
+
+This is an example of how to create a table with constraints. Below, a table is
+created with two example constraints: one does not allow non-alphanumeric keys,
+and the other does not allow non-numeric values. Two inserts that violate these
+constraints are attempted and denied. The scan at the end shows the inserts
+were not allowed.
+
+    $ ./bin/accumulo shell -u username -p password
+
+    Shell - Apache Accumulo Interactive Shell
+    -
+    - version: 1.5.0
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    username@instance> createtable testConstraints
+    username@instance testConstraints> constraint -a org.apache.accumulo.examples.simple.constraints.NumericValueConstraint
+    username@instance testConstraints> constraint -a org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint
+    username@instance testConstraints> insert r1 cf1 cq1 1111
+    username@instance testConstraints> insert r1 cf1 cq1 ABC
+      Constraint Failures:
+          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint, violationCode:1, violationDescription:Value is not numeric, numberOfViolatingMutations:1)
+    username@instance testConstraints> insert r1! cf1 cq1 ABC
+      Constraint Failures:
+          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint, violationCode:1, violationDescription:Value is not numeric, numberOfViolatingMutations:1)
+          ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint, violationCode:1, violationDescription:Row was not alpha numeric, numberOfViolatingMutations:1)
+    username@instance testConstraints> scan
+    r1 cf1:cq1 []    1111
+    username@instance testConstraints>
+
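+Constraints can also be added programmatically; a minimal sketch, assuming an
+existing Connector named conn:
+
+    // Add the numeric value constraint; addConstraint returns the constraint
+    // number assigned to the class on this table.
+    int id = conn.tableOperations().addConstraint("testConstraints",
+        "org.apache.accumulo.examples.simple.constraints.NumericValueConstraint");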