Posted to commits@accumulo.apache.org by ct...@apache.org on 2016/04/29 05:56:40 UTC

[1/4] accumulo git commit: Favor markdown over html

Repository: accumulo
Updated Branches:
  refs/heads/asf-site 6db455fd0 -> e938fe2bd
  refs/heads/gh-pages d558c0da8 -> 964cf811c


http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/papers/index.md
----------------------------------------------------------------------
diff --git a/papers/index.md b/papers/index.md
index ad2cd00..b4a11a9 100644
--- a/papers/index.md
+++ b/papers/index.md
@@ -5,195 +5,35 @@ nav: nav_papers
 
 ## Papers and Presentations
 
-<table id="citationtable" class="table table-bordered table-striped" style="width:100%">
-  <thead>
-    <tr><th>Title</th><th>Links</th><th>Category</th><th>Venue</th><th>Peer-Reviewed</th><th>Awards</th></tr>
-  </thead>
-  <tbody>
-    <tr><td>Prout, Andrew, et al. "Enabling On-Demand Database Computing with MIT SuperCloud Database Management System." IEEE HPEC 2015.</td>
-      <td><a href="https://arxiv.org/abs/1506.08506">paper</a></td>
-      <td>Architecture</td>
-      <td>IEEE HPEC 2015</td>
-      <td>Yes</td>
-      <td/>
-    </tr>
-    <tr><td>Hubbell, Matthew, et al. "Big Data Strategies for Data Center Infrastructure Management Using a 3D Gaming Platform." IEEE HPEC 2015.</td>
-      <td><a href="https://arxiv.org/abs/1506.08505">paper</a></td>
-      <td>Use Cases</td>
-      <td>IEEE HPEC 2015</td>
-      <td>Yes</td>
-      <td/>
-    </tr>
-    <tr><td>Gadepally, Vijay, et al. "Computing on Masked Data to improve the Security of Big Data." IEEE HST 2015.</td>
-      <td><a href="https://arxiv.org/abs/1504.01287">paper</a></td>
-      <td>Use Cases</td>
-      <td>IEEE HST 2015</td>
-      <td>Yes</td>
-      <td/>
-    </tr>
-    <tr><td>Kepner, Jeremy, et al. "Associative Arrays: Unified Mathematics for Spreadsheets, Databases, Matrices, and Graphs." New England Database Summit 2015.</td>
-      <td><a href="https://arxiv.org/abs/1501.05709">paper</a></td>
-      <td>Use Cases</td>
-      <td>New England Database Summit 2015</td>
-      <td>Yes</td>
-      <td/>
-    </tr>
-    <tr><td>Achieving 100,000,000 database inserts per second using Accumulo and D4M - Kepner et al IEEE HPEC 2014</td>
-      <td><a href="https://arxiv.org/abs/1406.4923">paper</a></td>
-      <td>Performance</td>
-      <td>IEEE HPEC 2014</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Evaluating Accumulo Performance for a Scalable Cyber Data Processing Pipeline - Sawyer et al IEEE HPEC 2014</td>
-      <td><a href="https://arxiv.org/abs/1407.5661">paper</a></td>
-      <td>Performance</td>
-      <td>IEEE HPEC 2014</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Understanding Query Performance in Accumulo - Sawyer et al IEEE HPEC 2013</td>
-      <td><a href="http://ieee-hpec.org/2013/index_htm_files/28-2868615.pdf">paper</a></td>
-      <td>Performance</td>
-      <td>IEEE HPEC 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Benchmarking Apache Accumulo BigData Distributed Table Store Using Its Continuous Test Suite - Sen et al 2013 IEEE International Congress on Big Data.</td>
-      <td><a href="https://sqrrl.com/media/Accumulo-Benchmark-10312013-1.pdf">paper</a></td>
-      <td>Performance</td>
-      <td>IEEE International Congress on Big Data 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Benchmarking the Apache Accumulo Distributed Key-Value Store - Sen et al 2014 Updated version of the IEEE paper, including edits for clarity and additional results from benchmarking on Amazon EC2.</i></td>
-      <td><a href="accumulo-benchmarking-2.1.pdf">paper</a></td>
-      <td>Performance</td>
-      <td>Online</td>
-      <td>No</td>
-      <td></td>
-    </tr>
-    <tr><td>Computing on Masked Data: a High Performance Method for Improving Big Data Veracity - Kepner et al IEEE HPEC 2014</td>
-      <td><a href="https://arxiv.org/abs/1406.5751">paper</a></td>
-      <td>Security</td>
-      <td>IEEE HPEC 2014</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database - Kepner et al IEEE HPEC 2013</td>
-      <td><a href="http://ieee-hpec.org/2013/index_htm_files/11-Kepner-D4Mschema-IEEE-HPEC.pdf">paper</a> <a href="http://ieee-hpec.org/2013/index_htm_files/11_130716-D4Mschema.pdf">slides</a></td>
-      <td>Architecture</td>
-      <td>IEEE HPEC 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Genetic Sequence Matching Using D4M Big Data Approaches - Dodson et al IEEE HPEC 2014</td>
-      <td></td>
-      <td>Use Cases</td>
-      <td>IEEE HPEC 2014</td>
-      <td>Yes</td>
-      <td>Best Paper Finalist</td>
-    </tr>
-    <tr><td>Big Data Dimensional Analysis - Gadepally et al IEEE HPEC 2014</td>
-      <td></td>
-      <td>Use Cases</td>
-      <td>IEEE HPEC 2014</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>LLSuperCloud: Sharing HPC systems for diverse rapid prototyping - Reuther et al IEEE HPEC 2013</td>
-      <td><a href="http://ieee-hpec.org/2013/index_htm_files/26-HPEC13_LLSuperCloud_Reuther_final.pdf">paper</a> <a href="http://ieee-hpec.org/2013/index_htm_files/HPEC+2013+Reuther+SuperCloud+final.pdf">slides</a></td>
-      <td>Architecture</td>
-      <td>IEEE HPEC 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Spatio-temporal indexing in non-relational distributed databases - Fox et al 2013 IEEE International Congress on Big Data</td>
-      <td><a href="http://geomesa.github.io/assets/outreach/SpatioTemporalIndexing_IEEEcopyright.pdf">paper</a></td>
-      <td>Use Cases</td>
-      <td>IEEE International Congress on Big Data 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Typograph: Multiscale spatial exploration of text documents - Endert, Alex, et al 2013 IEEE International Congress on Big Data</td>
-      <td><a href="https://people.cs.vt.edu/aendert/Alex_Endert/Research_files/Typograph.pdf">paper</a></td>
-      <td>Use Cases</td>
-      <td>IEEE International Congress on Big Data 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>UxV Data to the Cloud via Widgets - Charles et al 18th International Command &amp; Control Research &amp; Technology Symposium 2013</td>
-      <td><a href="http://www.dodccrp.org/events/18th_iccrts_2013/post_conference/papers/051.pdf">paper</a></td>
-      <td>Use Cases</td>
-      <td>Command &amp; Control Research &amp; Technology Symposium 2013</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Driving Big Data With Big Compute - Byun et al IEEE HPEC 2012</td>
-      <td><a href="http://www.mit.edu/~kepner/pubs/ByunKepner_2012_BigData_Paper.pdf">paper</a> <a href="http://ieee-hpec.org/2012/index_htm_files/42_ID18.pptx">slides</a></td>
-      <td>Architecture</td>
-      <td>IEEE HPEC 2012</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Large Scale Network Situational Awareness Via 3D Gaming Technology - Hubbell et al IEEE HPEC 2012</td>
-      <td><a href="http://ieee-hpec.org/2012/index_htm_files/HPEC12_Hubbell.pdf">paper</a> <a href="http://ieee-hpec.org/2012/index_htm_files/31_ID20.pptx">slides</a></td>
-      <td>Use Cases</td>
-      <td>IEEE HPEC 2012</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Rya: a scalable RDF triple store for the clouds - Punnoose et al 1st International Workshop on Cloud Intelligence, ACM, 2012</td>
-      <td><a href="https://sqrrl.com/media/Rya_CloudI20121.pdf">paper</a></td>
-      <td>Architecture</td>
-      <td>ACM International Workshop on Cloud Intelligence 2012</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Dynamic distributed dimensional data model (D4M) database and computation system - Kepner et al ICASSP 2012</td>
-      <td><a href="http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Paper.pdf">paper</a> <a href="http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Slides.pdf">slides</a></td>
-      <td>Architecture</td>
-      <td>ICASSP 2012</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>An NSA Big Graph experiment (Technical Report NSA-RD-2013-056002v1)</td>
-      <td><a href="http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf">slides</a></td>
-      <td>Performance</td>
-      <td>Technical Report</td>
-      <td>No</td>
-      <td></td>
-    </tr>
-    <tr><td>Patil, Swapnil, et al. "YCSB++: benchmarking and performance debugging advanced features in scalable table stores." Proceedings of the 2nd ACM Symposium on Cloud Computing. ACM, 2011.</td>
-      <td><a href="http://www.pdl.cmu.edu/PDL-FTP/Storage/socc2011.pdf">paper</a> <a href="http://www.cercs.gatech.edu/opencirrus/OCsummit11/presentations/patil.pdf">slides</a></td>
-      <td>Performance</td>
-      <td>ACM Symposium on Cloud Computing 2011</td>
-      <td>Yes</td>
-      <td></td>
-    </tr>
-    <tr><td>Turner, Keith "Apache Accumulo 1.4 &amp; 1.5 Features." February, 2012</td>
-      <td><a href="https://home.apache.org/~kturner/accumulo14_15.pdf">slides</a></td>
-      <td>Architecture</td>
-      <td>Meetup</td>
-      <td>No</td>
-      <td></td>
-    </tr>
-    <tr><td>Fuchs, Adam "Accumulo: Extensions to Google's Bigtable Design." March, 2012</td>
-      <td><a href="https://home.apache.org/~afuchs/slides/morgan_state_talk.pdf">slides</a></td>
-      <td>Architecture</td>
-      <td>Meetup</td>
-      <td>No</td>
-      <td></td>
-    </tr>
-    <tr><td>Miner, Donald "An Introduction to Apache Accumulo - how it works, why it exists, and how it is used." August, 2014</td>
-      <td><a href="https://www.slideshare.net/DonaldMiner/an-introduction-to-accumulo">slides</a></td>
-      <td>Architecture</td>
-      <td>Online</td>
-      <td>No</td>
-      <td/>
-    </tr>
-  </tbody>
-</table>
+{: #citationtable .table .table-bordered .table-striped style="width:100%;" }
+| Title                                                                                                                                                                                                | Links                      | Category     | Venue                                                          | Peer-Reviewed | Awards              |
+|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|--------------|----------------------------------------------------------------|---------------|---------------------|
+| Prout, Andrew, et al. "Enabling On-Demand Database Computing with MIT SuperCloud Database Management System." IEEE HPEC 2015.                                                                        | [paper][p01]               | Architecture | IEEE HPEC 2015                                                 | Yes           |                     |
+| Hubbell, Matthew, et al. "Big Data Strategies for Data Center Infrastructure Management Using a 3D Gaming Platform." IEEE HPEC 2015.                                                                 | [paper][p02]               | Use Cases    | IEEE HPEC 2015                                                 | Yes           |                     |
+| Gadepally, Vijay, et al. "Computing on Masked Data to improve the Security of Big Data." IEEE HST 2015.                                                                                              | [paper][p03]               | Use Cases    | IEEE HST 2015                                                  | Yes           |                     |
+| Kepner, Jeremy, et al. "Associative Arrays: Unified Mathematics for Spreadsheets, Databases, Matrices, and Graphs." New England Database Summit 2015.                                                | [paper][p04]               | Use Cases    | New England Database Summit 2015                               | Yes           |                     |
+| Achieving 100,000,000 database inserts per second using Accumulo and D4M - Kepner et al IEEE HPEC 2014                                                                                               | [paper][p05]               | Performance  | IEEE HPEC 2014                                                 | Yes           |                     |
+| Evaluating Accumulo Performance for a Scalable Cyber Data Processing Pipeline - Sawyer et al IEEE HPEC 2014                                                                                          | [paper][p06]               | Performance  | IEEE HPEC 2014                                                 | Yes           |                     |
+| Understanding Query Performance in Accumulo - Sawyer et al IEEE HPEC 2013                                                                                                                            | [paper][p07]               | Performance  | IEEE HPEC 2013                                                 | Yes           |                     |
+| Benchmarking Apache Accumulo BigData Distributed Table Store Using Its Continuous Test Suite - Sen et al 2013 IEEE International Congress on Big Data.                                               | [paper][p08]               | Performance  | IEEE International Congress on Big Data 2013                   | Yes           |                     |
+| Benchmarking the Apache Accumulo Distributed Key-Value Store - Sen et al 2014 Updated version of the IEEE paper, including edits for clarity and additional results from benchmarking on Amazon EC2. | [paper][p09]               | Performance  | Online                                                         | No            |                     |
+| Computing on Masked Data: a High Performance Method for Improving Big Data Veracity - Kepner et al IEEE HPEC 2014                                                                                    | [paper][p10]               | Security     | IEEE HPEC 2014                                                 | Yes           |                     |
+| D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database - Kepner et al IEEE HPEC 2013                                                                                    | [paper][p11] [slides][s11] | Architecture | IEEE HPEC 2013                                                 | Yes           |                     |
+| Genetic Sequence Matching Using D4M Big Data Approaches - Dodson et al IEEE HPEC 2014                                                                                                                |                            | Use Cases    | IEEE HPEC 2014                                                 | Yes           | Best Paper Finalist |
+| Big Data Dimensional Analysis - Gadepally et al IEEE HPEC 2014                                                                                                                                       |                            | Use Cases    | IEEE HPEC 2014                                                 | Yes           |                     |
+| LLSuperCloud: Sharing HPC systems for diverse rapid prototyping - Reuther et al IEEE HPEC 2013                                                                                                       | [paper][p12] [slides][s12] | Architecture | IEEE HPEC 2013                                                 | Yes           |                     |
+| Spatio-temporal indexing in non-relational distributed databases - Fox et al 2013 IEEE International Congress on Big Data                                                                            | [paper][p13]               | Use Cases    | IEEE International Congress on Big Data 2013                   | Yes           |                     |
+| Typograph: Multiscale spatial exploration of text documents - Endert, Alex, et al 2013 IEEE International Congress on Big Data                                                                       | [paper][p14]               | Use Cases    | IEEE International Congress on Big Data 2013                   | Yes           |                     |
+| UxV Data to the Cloud via Widgets - Charles et al 18th International Command &amp; Control Research &amp; Technology Symposium 2013                                                                  | [paper][p15]               | Use Cases    | Command &amp; Control Research &amp; Technology Symposium 2013 | Yes           |                     |
+| Driving Big Data With Big Compute - Byun et al IEEE HPEC 2012                                                                                                                                        | [paper][p16] [slides][s16] | Architecture | IEEE HPEC 2012                                                 | Yes           |                     |
+| Large Scale Network Situational Awareness Via 3D Gaming Technology - Hubbell et al IEEE HPEC 2012                                                                                                    | [paper][p17] [slides][s17] | Use Cases    | IEEE HPEC 2012                                                 | Yes           |                     |
+| Rya: a scalable RDF triple store for the clouds - Punnoose et al 1st International Workshop on Cloud Intelligence, ACM, 2012                                                                         | [paper][p18]               | Architecture | ACM International Workshop on Cloud Intelligence 2012          | Yes           |                     |
+| Dynamic distributed dimensional data model (D4M) database and computation system - Kepner et al ICASSP 2012                                                                                          | [paper][p19] [slides][s19] | Architecture | ICASSP 2012                                                    | Yes           |                     |
+| An NSA Big Graph experiment (Technical Report NSA-RD-2013-056002v1)                                                                                                                                  | [slides][s20]              | Performance  | Technical Report                                               | No            |                     |
+| Patil, Swapnil, et al. "YCSB++: benchmarking and performance debugging advanced features in scalable table stores." Proceedings of the 2nd ACM Symposium on Cloud Computing. ACM, 2011.              | [paper][p21] [slides][s21] | Performance  | ACM Symposium on Cloud Computing 2011                          | Yes           |                     |
+| Turner, Keith "Apache Accumulo 1.4 &amp; 1.5 Features." February, 2012                                                                                                                               | [slides][s22]              | Architecture | Meetup                                                         | No            |                     |
+| Fuchs, Adam "Accumulo: Extensions to Google's Bigtable Design." March, 2012                                                                                                                          | [slides][s23]              | Architecture | Meetup                                                         | No            |                     |
+| Miner, Donald "An Introduction to Apache Accumulo - how it works, why it exists, and how it is used." August, 2014                                                                                   | [slides][s24]              | Architecture | Online                                                         | No            |                     |
 
 <script type="text/javascript">
 $(function() {
@@ -239,3 +79,35 @@ $("#citationtable").dataTable();
  - [Chubby](https://research.google.com/archive/chubby.html)
  - [Dapper](https://research.google.com/pubs/pub36356.html)
  - [Bloom Filter](https://en.wikipedia.org/wiki/Bloom_filter)
+
+
+[p01]: https://arxiv.org/abs/1506.08506
+[p02]: https://arxiv.org/abs/1506.08505
+[p03]: https://arxiv.org/abs/1504.01287
+[p04]: https://arxiv.org/abs/1501.05709
+[p05]: https://arxiv.org/abs/1406.4923
+[p06]: https://arxiv.org/abs/1407.5661
+[p07]: http://ieee-hpec.org/2013/index_htm_files/28-2868615.pdf
+[p08]: https://sqrrl.com/media/Accumulo-Benchmark-10312013-1.pdf
+[p09]: accumulo-benchmarking-2.1.pdf
+[p10]: https://arxiv.org/abs/1406.5751
+[p11]: http://ieee-hpec.org/2013/index_htm_files/11-Kepner-D4Mschema-IEEE-HPEC.pdf
+[s11]: http://ieee-hpec.org/2013/index_htm_files/11_130716-D4Mschema.pdf
+[p12]: http://ieee-hpec.org/2013/index_htm_files/26-HPEC13_LLSuperCloud_Reuther_final.pdf
+[s12]: http://ieee-hpec.org/2013/index_htm_files/HPEC+2013+Reuther+SuperCloud+final.pdf
+[p13]: http://geomesa.github.io/assets/outreach/SpatioTemporalIndexing_IEEEcopyright.pdf
+[p14]: https://people.cs.vt.edu/aendert/Alex_Endert/Research_files/Typograph.pdf
+[p15]: http://www.dodccrp.org/events/18th_iccrts_2013/post_conference/papers/051.pdf
+[p16]: http://www.mit.edu/~kepner/pubs/ByunKepner_2012_BigData_Paper.pdf
+[s16]: http://ieee-hpec.org/2012/index_htm_files/42_ID18.pptx
+[p17]: http://ieee-hpec.org/2012/index_htm_files/HPEC12_Hubbell.pdf
+[s17]: http://ieee-hpec.org/2012/index_htm_files/31_ID20.pptx
+[p18]: https://sqrrl.com/media/Rya_CloudI20121.pdf
+[p19]: http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Paper.pdf
+[s19]: http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Slides.pdf
+[s20]: http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf
+[p21]: http://www.pdl.cmu.edu/PDL-FTP/Storage/socc2011.pdf
+[s21]: http://www.cercs.gatech.edu/opencirrus/OCsummit11/presentations/patil.pdf
+[s22]: https://home.apache.org/~kturner/accumulo14_15.pdf
+[s23]: https://home.apache.org/~afuchs/slides/morgan_state_talk.pdf
+[s24]: https://www.slideshare.net/DonaldMiner/an-introduction-to-accumulo

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.5.1.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.1.md b/release_notes/1.5.1.md
index df5203f..a13de7c 100644
--- a/release_notes/1.5.1.md
+++ b/release_notes/1.5.1.md
@@ -158,56 +158,15 @@ has a set of tests that must be run before the candidate is capable of becoming
 Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>HDP 2.0 (Apache 2.2.0)</td>
-    <td>6</td>
-    <td>HDP 2.0 (Apache 3.4.5)</td>
-    <td>Yes (QJM)</td>
-    <td>All required tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
-    <td>7</td>
-    <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
-    <td>Yes (QJM)</td>
-    <td>Unit, functional and 24hr Randomwalk w/ agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
-    <td>7</td>
-    <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
-    <td>Yes (QJM)</td>
-    <td>2x 24/hr continuous ingest w/ verification</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.3</td>
-    <td>Apache 1.0.4</td>
-    <td>1</td>
-    <td>Apache 3.3.5</td>
-    <td>No</td>
-    <td>Local testing, unit and functional tests</td>
-  </tr>
-  <tr>
-    <td>RHEL 6.4</td>
-    <td>Apache 2.2.0</td>
-    <td>10</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Functional tests</td>
-  </tr>
-</table>
+
+{: #release_notes_testing .table }
+| OS         | Hadoop                     | Nodes | ZooKeeper                  | HDFS High-Availability | Tests                                             |
+|------------|----------------------------|-------|----------------------------|------------------------|---------------------------------------------------|
+| CentOS 6.5 | HDP 2.0 (Apache 2.2.0)     | 6     | HDP 2.0 (Apache 3.4.5)     | Yes (QJM)              | All required tests                                |
+| CentOS 6.4 | CDH 4.5.0 (2.0.0+cdh4.5.0) | 7     | CDH 4.5.0 (3.4.5+cdh4.5.0) | Yes (QJM)              | Unit, functional and 24hr Randomwalk w/ agitation |
+| CentOS 6.4 | CDH 4.5.0 (2.0.0+cdh4.5.0) | 7     | CDH 4.5.0 (3.4.5+cdh4.5.0) | Yes (QJM)              | 2x 24/hr continuous ingest w/ verification        |
+| CentOS 6.3 | Apache 1.0.4               | 1     | Apache 3.3.5               | No                     | Local testing, unit and functional tests          |
+| RHEL 6.4   | Apache 2.2.0               | 10    | Apache 3.4.5               | No                     | Functional tests                                  |
 
 [1]: https://issues.apache.org/jira/browse/ACCUMULO-1905?focusedCommentId=13915208&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13915208
 [2]: https://issues.apache.org/jira/browse/ACCUMULO-1950

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.5.2.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.2.md b/release_notes/1.5.2.md
index 6b942ba..e1a47ae 100644
--- a/release_notes/1.5.2.md
+++ b/release_notes/1.5.2.md
@@ -142,32 +142,12 @@ The following documentation updates were made:
 Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Apache 2.6.0-SNAPSHOT</td>
-    <td>1</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests, ContinuousIngest w/ verification (1B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>Apache 2.3.0</td>
-    <td>20</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>24/hr RandomWalk, 24/hr ContinuousIngest w/ verification w/ and w/o agitation (30B and 23B entries)</td>
-  </tr>
-</table>
+
+{: #release_notes_testing .table }
+| OS       | Hadoop                | Nodes | ZooKeeper    | HDFS High-Availability | Tests                                                                                               |
+|----------|-----------------------|-------|--------------|------------------------|-----------------------------------------------------------------------------------------------------|
+| Gentoo   | Apache 2.6.0-SNAPSHOT | 1     | Apache 3.4.5 | No                     | Unit and Functional Tests, ContinuousIngest w/ verification (1B entries)                            |
+| CentOS 6 | Apache 2.3.0          | 20    | Apache 3.4.5 | No                     | 24/hr RandomWalk, 24/hr ContinuousIngest w/ verification w/ and w/o agitation (30B and 23B entries) |
 
 
 [1]: https://issues.apache.org/jira/browse/ACCUMULO-2586

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.5.3.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.3.md b/release_notes/1.5.3.md
index 9e81f5e..9d4956e 100644
--- a/release_notes/1.5.3.md
+++ b/release_notes/1.5.3.md
@@ -92,32 +92,11 @@ of these failures. Users are encouraged to follow
 One possible workaround is to increase the `general.rpc.timeout` in the
 Accumulo configuration from `120s` to `240s`.
 
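As a sketch of that workaround, assuming the stock Hadoop-style `conf/accumulo-site.xml` layout (the property name and the `120s`/`240s` values are taken from the note above; the file layout itself is an assumption, not part of this change):

    <!-- conf/accumulo-site.xml: raise the RPC timeout from the 120s default -->
    <configuration>
      <property>
        <name>general.rpc.timeout</name>
        <value>240s</value>
      </property>
    </configuration>

Site-file changes like this generally take effect only after the affected Accumulo processes are restarted.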
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>2.6.0</td>
-    <td>1</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS         | Hadoop | Nodes | ZooKeeper | HDFS High-Availability | Tests                        |
+|------------|--------|-------|-----------|------------------------|------------------------------|
+| Gentoo     | 2.6.0  | 1     | 3.4.5     | No                     | Unit and Integration Tests   |
+| Centos 6.5 | 2.7.1  | 6     | 3.4.5     | No                     | Continuous Ingest and Verify |
 
 [ACCUMULO-3316]: https://issues.apache.org/jira/browse/ACCUMULO-3316
 [ACCUMULO-3317]: https://issues.apache.org/jira/browse/ACCUMULO-3317

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.5.4.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.4.md b/release_notes/1.5.4.md
index b7cf86d..6981e51 100644
--- a/release_notes/1.5.4.md
+++ b/release_notes/1.5.4.md
@@ -54,32 +54,11 @@ and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
 randomly restarting Accumulo processes and Hadoop DataNode processes, and, in
 HDFS High-Availability instances, forcing NameNode fail-over.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>OSX</td>
-    <td>2.6.0</td>
-    <td>1</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify (10B entries), Randomwalk (24hrs)</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS         | Hadoop | Nodes | ZooKeeper | HDFS High-Availability | Tests                                                          |
+|------------|--------|-------|-----------|------------------------|----------------------------------------------------------------|
+| OSX        | 2.6.0  | 1     | 3.4.5     | No                     | Unit and Functional Tests                                      |
+| Centos 6.5 | 2.7.1  | 6     | 3.4.5     | No                     | Continuous Ingest and Verify (10B entries), Randomwalk (24hrs) |
 
 [ACCUMULO-3967]: https://issues.apache.org/jira/browse/ACCUMULO-3967
 [ACCUMULO-3939]: https://issues.apache.org/jira/browse/ACCUMULO-3939

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.0.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.0.md b/release_notes/1.6.0.md
index 689df9b..1e5d311 100644
--- a/release_notes/1.6.0.md
+++ b/release_notes/1.6.0.md
@@ -110,7 +110,7 @@ One notable change that was made to the binary tarball is the purposeful omissio
 This shared library is used at ingest time to implement an off-JVM-heap sorted map that greatly increases ingest throughput while side-stepping
 issues such as JVM garbage collection pauses. In earlier releases, a pre-built copy of this shared library was included in the binary tarball; however, the decision was made to omit this due to the potential variance in toolchains on the target system.
 
-It is recommended that users invoke the provided build_native_library.sh before running Accumulo:
+It is recommended that users invoke the provided build\_native\_library.sh before running Accumulo:
 
        $ACCUMULO_HOME/bin/build_native_library.sh
 
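For completeness, a minimal sketch of keeping native maps enabled once the library has been built; the `tserver.memory.maps.native.enabled` property name is assumed from the standard tablet server configuration and is not part of this change:

    <!-- conf/accumulo-site.xml (Hadoop-style XML): keep native maps enabled
         after $ACCUMULO_HOME/bin/build_native_library.sh has produced the library -->
    <configuration>
      <property>
        <name>tserver.memory.maps.native.enabled</name>
        <value>true</value>
      </property>
    </configuration>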
@@ -247,108 +247,18 @@ The following acronyms are used in the test testing table.
  * IT : Integration test, run w/ `mvn verify`
  * RW : Random Walk
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Java</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS HA</th>
-    <th>Version/Commit hash</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC1 + ACCUMULO_2668 patch</td>
-    <td>24-hour CI w/o agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC2</td>
-    <td>24-hour RW (Conditional.xml module) w/o agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>24-hour CI w/ agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.6 and 1.7</td>
-    <td>Apache 1.2.1, 2.2.0</td>
-    <td>Single</td>
-    <td>Apache 3.3.6</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>All unit and ITs w/  <code>-Dhadoop.profile=2</code> and <code>-Dhadoop.profile=1</code></td>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Sun JDK 1.6.0_45</td>
-    <td>Apache 1.2.1, 2.2.0, 2.3.0, 2.4.0</td>
-    <td>Single</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>All unit and ITs. 2B entries ingested/verified with CI </td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC4 and RC5</td>
-    <td>24-hour RW (LongClean) with and without agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>3a1b38</td>
-    <td>72-hour CI with and without agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC2</td>
-    <td>24-hour CI without agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC3</td>
-    <td>24-hour CI with agitation. Verified.</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS         | Java                       | Hadoop                            | Nodes        | ZooKeeper    | HDFS HA | Version/Commit hash              | Tests                                                              |
+|------------|----------------------------|-----------------------------------|--------------|--------------|---------|----------------------------------|--------------------------------------------------------------------|
+| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC1 + ACCUMULO\_2668 patch | 24-hour CI w/o agitation. Verified.                                |
+| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC2                        | 24-hour RW (Conditional.xml module) w/o agitation                  |
+| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC5                        | 24-hour CI w/ agitation. Verified.                                 |
+| CentOS 6.5 | CentOS OpenJDK 1.6 and 1.7 | Apache 1.2.1, 2.2.0               | Single       | Apache 3.3.6 | No      | 1.6.0 RC5                        | All unit and ITs w/  `-Dhadoop.profile=2` and `-Dhadoop.profile=1` |
+| Gentoo     | Sun JDK 1.6.0\_45          | Apache 1.2.1, 2.2.0, 2.3.0, 2.4.0 | Single       | Apache 3.4.5 | No      | 1.6.0 RC5                        | All unit and ITs. 2B entries ingested/verified with CI             |
+| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC4 and RC5                | 24-hour RW (LongClean) with and without agitation                  |
+| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 3a1b38                           | 72-hour CI with and without agitation. Verified.                   |
+| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC2                        | 24-hour CI without agitation. Verified.                            |
+| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC3                        | 24-hour CI with agitation. Verified.                               |
 
 [ACCUMULO-1]: https://issues.apache.org/jira/browse/ACCUMULO-1
 [ACCUMULO-112]: https://issues.apache.org/jira/browse/ACCUMULO-112 "Partition data in memory by locality group"

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.1.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.1.md b/release_notes/1.6.1.md
index c4960c9..3949bd8 100644
--- a/release_notes/1.6.1.md
+++ b/release_notes/1.6.1.md
@@ -151,33 +151,12 @@ The following documentation updates were made:
 Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Apache 2.6.0-SNAPSHOT</td>
-    <td>2</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests, ContinuousIngest w/ verification (2B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>Apache 2.3.0</td>
-    <td>20</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>24/hr RandomWalk, ContinuousIngest w/ verification w/ and w/o agitation (17B entries), 24hr Randomwalk test</td>
-  </tr>
-</table>
 
+{: #release_notes_testing .table }
+| OS         | Hadoop                | Nodes | ZooKeeper    | HDFS HA | Tests                                                                                                       |
+|------------|-----------------------|-------|--------------|---------|-------------------------------------------------------------------------------------------------------------|
+| Gentoo     | Apache 2.6.0-SNAPSHOT | 2     | Apache 3.4.5 | No      | Unit and Functional Tests, ContinuousIngest w/ verification (2B entries)                                    |
+| CentOS 6   | Apache 2.3.0          | 20    | Apache 3.4.5 | No      | 24/hr RandomWalk, ContinuousIngest w/ verification w/ and w/o agitation (17B entries), 24hr Randomwalk test |
 
 [1]: https://issues.apache.org/jira/browse/ACCUMULO-2586
 [2]: https://issues.apache.org/jira/browse/ACCUMULO-2658

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.2.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.2.md b/release_notes/1.6.2.md
index fadcf00..f45b052 100644
--- a/release_notes/1.6.2.md
+++ b/release_notes/1.6.2.md
@@ -142,49 +142,14 @@ for configuring an instance with native maps.
 Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Mac OSX</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Fedora 21</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>2.6</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively)</td>
-  </tr>
-</table>
 
+{: #release_notes_testing .table }
+| OS        | Hadoop | Nodes | ZooKeeper | HDFS HA | Tests                                                                                     |
+|-----------|--------|-------|-----------|---------|-------------------------------------------------------------------------------------------|
+| Gentoo    | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
+| Mac OSX   | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
+| Fedora 21 | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
+| CentOS 6  | 2.6    | 20    | 3.4.5     | No      | ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively) |
 
 [1]: https://semver.org
 [2]: https://github.com/apache/accumulo#api

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.3.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.3.md b/release_notes/1.6.3.md
index 0de6fed..4d477e2 100644
--- a/release_notes/1.6.3.md
+++ b/release_notes/1.6.3.md
@@ -80,48 +80,13 @@ and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr ContinuousIngest w/ verification w/ and w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr Randomwalk w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify (6B entries)</td>
-  </tr>
-  <tr>
-    <td>Centos 6.6</td>
-    <td>2.2.0</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>All integration test passed.  Some needed to be run a 2nd time.</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS                   | Hadoop | Nodes | ZooKeeper | HDFS HA | Tests                                                           |
+|----------------------|--------|-------|-----------|---------|-----------------------------------------------------------------|
+| Amazon Linux 2014.09 | 2.6.0  | 20    | 3.4.5     | No      | 24hr ContinuousIngest w/ verification w/ and w/o agitation      |
+| Amazon Linux 2014.09 | 2.6.0  | 20    | 3.4.5     | No      | 24hr Randomwalk w/o agitation                                   |
+| Centos 6.5           | 2.7.1  | 6     | 3.4.5     | No      | Continuous Ingest and Verify (6B entries)                       |
+| Centos 6.6           | 2.2.0  | 6     | 3.4.5     | No      | All integration test passed.  Some needed to be run a 2nd time. |
 
 [1]: https://issues.apache.org/jira/browse/HDFS-8406
 [3]: {{ site.baseurl }}/release_notes/1.6.0

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.4.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.4.md b/release_notes/1.6.4.md
index d49cc19..62757cc 100644
--- a/release_notes/1.6.4.md
+++ b/release_notes/1.6.4.md
@@ -49,25 +49,10 @@ and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>ContinuousIngest w/ verification w/ and w/o agitation (37B entries)</td>
-  </tr>
-</table>
-
+{: #release_notes_testing .table }
+| OS                   | Hadoop | Nodes | ZooKeeper | HDFS HA | Tests                                                               |
+|----------------------|--------|-------|-----------|---------|---------------------------------------------------------------------|
+| Amazon Linux 2014.09 | 2.6.0  | 20    | 3.4.5     | No      | ContinuousIngest w/ verification w/ and w/o agitation (37B entries) |
 
 [ACCUMULO-3979]: https://issues.apache.org/jira/browse/ACCUMULO-3979
 [ACCUMULO-3965]: https://issues.apache.org/jira/browse/ACCUMULO-3965

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.6.5.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.5.md b/release_notes/1.6.5.md
index 69c4726..1b6a121 100644
--- a/release_notes/1.6.5.md
+++ b/release_notes/1.6.5.md
@@ -81,48 +81,13 @@ and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 7.1</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Random walk (All.xml) 18-hour run (2 failures, both conflicting operations on same table in Concurrent test)</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1</td>
-    <td>2.6.3</td>
-    <td>6</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Continuous ingest with agitation (2B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.7</td>
-    <td>2.2.0 and 1.2.1</td>
-    <td>1</td>
-    <td>3.3.6</td>
-    <td>No</td>
-    <td>All unit and integration tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 (Oracle JDK8)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Continuous ingest with agitation (24hrs, 32B entries verified) on EC2 (1 m3.xlarge leader; 8 d2.xlarge workers)</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS                       | Hadoop          | Nodes | ZooKeeper | HDFS HA | Tests                                                                                                           |
+|--------------------------|-----------------|-------|-----------|---------|-----------------------------------------------------------------------------------------------------------------|
+| CentOS 7.1               | 2.6.3           | 9     | 3.4.6     | No      | Random walk (All.xml) 18-hour run (2 failures, both conflicting operations on same table in Concurrent test)    |
+| CentOS 7.1               | 2.6.3           | 6     | 3.4.6     | No      | Continuous ingest with agitation (2B entries)                                                                   |
+| CentOS 6.7               | 2.2.0 and 1.2.1 | 1     | 3.3.6     | No      | All unit and integration tests                                                                                  |
+| CentOS 7.1 (Oracle JDK8) | 2.6.3           | 9     | 3.4.6     | No      | Continuous ingest with agitation (24hrs, 32B entries verified) on EC2 (1 m3.xlarge leader; 8 d2.xlarge workers) |
 
 
 [JIRA_165]: https://issues.apache.org/jira/browse/ACCUMULO/fixforversion/12333674

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.7.0.md
----------------------------------------------------------------------
diff --git a/release_notes/1.7.0.md b/release_notes/1.7.0.md
index e0f05e8..5d452ef 100644
--- a/release_notes/1.7.0.md
+++ b/release_notes/1.7.0.md
@@ -360,48 +360,13 @@ of these failures. Users are encouraged to follow
 One possible workaround is to increase the `general.rpc.timeout` in the
 Accumulo configuration from `120s` to `240s`.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>2.6.0</td>
-    <td>1 (2 TServers)</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr CI w/ agitation and verification, 24hr RW w/o agitation.</td>
-  </tr>
-  <tr>
-    <td>Centos 6.6</td>
-    <td>2.6.0</td>
-    <td>3</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24hr RW w/ agitation, 24hr CI w/o agitation, 72hr CI w/ and w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Amazon Linux</td>
-    <td>2.6.0</td>
-    <td>20 m1large</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24hr CI w/o agitation</td>
-  </tr>
-</table>
+{: #release_notes_testing .table }
+| OS           | Hadoop | Nodes          | ZooKeeper | HDFS HA | Tests                                                                     |
+|--------------|--------|----------------|-----------|---------|---------------------------------------------------------------------------|
+| Gentoo       | N/A    | 1              | N/A       | No      | Unit and Integration Tests                                                |
+| Gentoo       | 2.6.0  | 1 (2 TServers) | 3.4.5     | No      | 24hr CI w/ agitation and verification, 24hr RW w/o agitation.             |
+| Centos 6.6   | 2.6.0  | 3              | 3.4.6     | No      | 24hr RW w/ agitation, 24hr CI w/o agitation, 72hr CI w/ and w/o agitation |
+| Amazon Linux | 2.6.0  | 20 m1large     | 3.4.6     | No      | 24hr CI w/o agitation                                                     |
 
 [ACCUMULO-378]: https://issues.apache.org/jira/browse/ACCUMULO-378
 [ACCUMULO-898]: https://issues.apache.org/jira/browse/ACCUMULO-898

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/release_notes/1.7.1.md
----------------------------------------------------------------------
diff --git a/release_notes/1.7.1.md b/release_notes/1.7.1.md
index b832b86..f6d3901 100644
--- a/release_notes/1.7.1.md
+++ b/release_notes/1.7.1.md
@@ -116,54 +116,19 @@ and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS/Environment</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Random walk (All.xml) 24-hour run, saw <a href="https://issues.apache.org/jira/browse/ACCUMULO-3794">ACCUMULO-3794</a> and <a href="https://issues.apache.org/jira/browse/ACCUMULO-4151">ACCUMULO-4151</a>.</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>21 hr run of CI w/ agitation, 23.1B entries verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24 hr run of CI w/o agitation, 23.0B entries verified; saw performance issues outlined in comment on <a href="https://issues.apache.org/jira/browse/ACCUMULO-4146">ACCUMULO-4146</a>.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.7 (OpenJDK 7), Fedora 23 (OpenJDK 8), and CentOS 7.2 (OpenJDK 7)</td>
-    <td>2.6.1</td>
-    <td>1</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>All unit tests and ITs pass with -Dhadoop.version=2.6.1; Kerberos ITs had a problem with earlier versions of Hadoop</td>
-  </tr>
-</table>
-
+{: #release_notes_testing .table }
+| OS/Environment                                                            | Hadoop | Nodes | ZooKeeper | HDFS HA | Tests                                                                                                                                |
+|---------------------------------------------------------------------------|--------|-------|-----------|---------|--------------------------------------------------------------------------------------------------------------------------------------|
+| CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)                | 2.6.3  | 9     | 3.4.6     | No      | Random walk (All.xml) 24-hour run, saw [ACCUMULO-3794][ACCUMULO-3794] and [ACCUMULO-4151][ACCUMULO-4151].                            |
+| CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)                | 2.6.3  | 9     | 3.4.6     | No      | 21 hr run of CI w/ agitation, 23.1B entries verified.                                                                                |
+| CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)                | 2.6.3  | 9     | 3.4.6     | No      | 24 hr run of CI w/o agitation, 23.0B entries verified; saw performance issues outlined in comment on [ACCUMULO-4146][ACCUMULO-4146]. |
+| CentOS 6.7 (OpenJDK 7), Fedora 23 (OpenJDK 8), and CentOS 7.2 (OpenJDK 7) | 2.6.1  | 1     | 3.4.6     | No      | All unit tests and ITs pass with -Dhadoop.version=2.6.1; Kerberos ITs had a problem with earlier versions of Hadoop                  |
 
 [JIRA_171]: https://issues.apache.org/jira/browse/ACCUMULO/fixforversion/12329940
 
 [ACCUMULO-3509]: https://issues.apache.org/jira/browse/ACCUMULO-3509
 [ACCUMULO-3734]: https://issues.apache.org/jira/browse/ACCUMULO-3734
+[ACCUMULO-3794]: https://issues.apache.org/jira/browse/ACCUMULO-3794
 [ACCUMULO-3859]: https://issues.apache.org/jira/browse/ACCUMULO-3859
 [ACCUMULO-3967]: https://issues.apache.org/jira/browse/ACCUMULO-3967
 [ACCUMULO-4016]: https://issues.apache.org/jira/browse/ACCUMULO-4016
@@ -179,4 +144,6 @@ HDFS High-Availability instances, forcing NameNode failover.
 [ACCUMULO-4080]: https://issues.apache.org/jira/browse/ACCUMULO-4080
 [ACCUMULO-4098]: https://issues.apache.org/jira/browse/ACCUMULO-4098
 [ACCUMULO-4113]: https://issues.apache.org/jira/browse/ACCUMULO-4113
+[ACCUMULO-4146]: https://issues.apache.org/jira/browse/ACCUMULO-4146
+[ACCUMULO-4151]: https://issues.apache.org/jira/browse/ACCUMULO-4151
 


[3/4] accumulo git commit: Jekyll build from gh-pages:964cf81

Posted by ct...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/papers/index.html
----------------------------------------------------------------------
diff --git a/papers/index.html b/papers/index.html
index 405bbfe..88b8580 100644
--- a/papers/index.html
+++ b/papers/index.html
@@ -240,192 +240,225 @@
         
         <h2 id="papers-and-presentations">Papers and Presentations</h2>
 
-<table id="citationtable" class="table table-bordered table-striped" style="width:100%">
+<table id="citationtable" class="table table-bordered table-striped" style="width:100%;">
   <thead>
-    <tr><th>Title</th><th>Links</th><th>Category</th><th>Venue</th><th>Peer-Reviewed</th><th>Awards</th></tr>
+    <tr>
+      <th>Title</th>
+      <th>Links</th>
+      <th>Category</th>
+      <th>Venue</th>
+      <th>Peer-Reviewed</th>
+      <th>Awards</th>
+    </tr>
   </thead>
   <tbody>
-    <tr><td>Prout, Andrew, et al. "Enabling On-Demand Database Computing with MIT SuperCloud Database Management System." IEEE HPEC 2015.</td>
+    <tr>
+      <td>Prout, Andrew, et al. “Enabling On-Demand Database Computing with MIT SuperCloud Database Management System.” IEEE HPEC 2015.</td>
       <td><a href="https://arxiv.org/abs/1506.08506">paper</a></td>
       <td>Architecture</td>
       <td>IEEE HPEC 2015</td>
       <td>Yes</td>
-      <td />
+      <td> </td>
     </tr>
-    <tr><td>Hubbell, Matthew, et al. "Big Data Strategies for Data Center Infrastructure Management Using a 3D Gaming Platform." IEEE HPEC 2015.</td>
+    <tr>
+      <td>Hubbell, Matthew, et al. “Big Data Strategies for Data Center Infrastructure Management Using a 3D Gaming Platform.” IEEE HPEC 2015.</td>
       <td><a href="https://arxiv.org/abs/1506.08505">paper</a></td>
       <td>Use Cases</td>
       <td>IEEE HPEC 2015</td>
       <td>Yes</td>
-      <td />
+      <td> </td>
     </tr>
-    <tr><td>Gadepally, Vijay, et al. "Computing on Masked Data to improve the Security of Big Data." IEEE HST 2015.</td>
+    <tr>
+      <td>Gadepally, Vijay, et al. “Computing on Masked Data to improve the Security of Big Data.” IEEE HST 2015.</td>
       <td><a href="https://arxiv.org/abs/1504.01287">paper</a></td>
       <td>Use Cases</td>
       <td>IEEE HST 2015</td>
       <td>Yes</td>
-      <td />
+      <td> </td>
     </tr>
-    <tr><td>Kepner, Jeremy, et al. "Associative Arrays: Unified Mathematics for Spreadsheets, Databases, Matrices, and Graphs." New England Database Summit 2015.</td>
+    <tr>
+      <td>Kepner, Jeremy, et al. “Associative Arrays: Unified Mathematics for Spreadsheets, Databases, Matrices, and Graphs.” New England Database Summit 2015.</td>
       <td><a href="https://arxiv.org/abs/1501.05709">paper</a></td>
       <td>Use Cases</td>
       <td>New England Database Summit 2015</td>
       <td>Yes</td>
-      <td />
+      <td> </td>
     </tr>
-    <tr><td>Achieving 100,000,000 database inserts per second using Accumulo and D4M - Kepner et al IEEE HPEC 2014</td>
+    <tr>
+      <td>Achieving 100,000,000 database inserts per second using Accumulo and D4M - Kepner et al IEEE HPEC 2014</td>
       <td><a href="https://arxiv.org/abs/1406.4923">paper</a></td>
       <td>Performance</td>
       <td>IEEE HPEC 2014</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Evaluating Accumulo Performance for a Scalable Cyber Data Processing Pipeline - Sawyer et al IEEE HPEC 2014</td>
+    <tr>
+      <td>Evaluating Accumulo Performance for a Scalable Cyber Data Processing Pipeline - Sawyer et al IEEE HPEC 2014</td>
       <td><a href="https://arxiv.org/abs/1407.5661">paper</a></td>
       <td>Performance</td>
       <td>IEEE HPEC 2014</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Understanding Query Performance in Accumulo - Sawyer et al IEEE HPEC 2013</td>
+    <tr>
+      <td>Understanding Query Performance in Accumulo - Sawyer et al IEEE HPEC 2013</td>
       <td><a href="http://ieee-hpec.org/2013/index_htm_files/28-2868615.pdf">paper</a></td>
       <td>Performance</td>
       <td>IEEE HPEC 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Benchmarking Apache Accumulo BigData Distributed Table Store Using Its Continuous Test Suite - Sen et al 2013 IEEE International Congress on Big Data.</td>
+    <tr>
+      <td>Benchmarking Apache Accumulo BigData Distributed Table Store Using Its Continuous Test Suite - Sen et al 2013 IEEE International Congress on Big Data.</td>
       <td><a href="https://sqrrl.com/media/Accumulo-Benchmark-10312013-1.pdf">paper</a></td>
       <td>Performance</td>
       <td>IEEE International Congress on Big Data 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Benchmarking the Apache Accumulo Distributed Key-Value Store - Sen et al 2014 Updated version of the IEEE paper, including edits for clarity and additional results from benchmarking on Amazon EC2.</td>
+    <tr>
+      <td>Benchmarking the Apache Accumulo Distributed Key-Value Store - Sen et al 2014 Updated version of the IEEE paper, including edits for clarity and additional results from benchmarking on Amazon EC2.</td>
       <td><a href="accumulo-benchmarking-2.1.pdf">paper</a></td>
       <td>Performance</td>
       <td>Online</td>
       <td>No</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Computing on Masked Data: a High Performance Method for Improving Big Data Veracity - Kepner et al IEEE HPEC 2014</td>
+    <tr>
+      <td>Computing on Masked Data: a High Performance Method for Improving Big Data Veracity - Kepner et al IEEE HPEC 2014</td>
       <td><a href="https://arxiv.org/abs/1406.5751">paper</a></td>
       <td>Security</td>
       <td>IEEE HPEC 2014</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database - Kepner et al IEEE HPEC 2013</td>
+    <tr>
+      <td>D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database - Kepner et al IEEE HPEC 2013</td>
       <td><a href="http://ieee-hpec.org/2013/index_htm_files/11-Kepner-D4Mschema-IEEE-HPEC.pdf">paper</a> <a href="http://ieee-hpec.org/2013/index_htm_files/11_130716-D4Mschema.pdf">slides</a></td>
       <td>Architecture</td>
       <td>IEEE HPEC 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Genetic Sequence Matching Using D4M Big Data Approaches - Dodson et al IEEE HPEC 2014</td>
-      <td></td>
+    <tr>
+      <td>Genetic Sequence Matching Using D4M Big Data Approaches - Dodson et al IEEE HPEC 2014</td>
+      <td> </td>
       <td>Use Cases</td>
       <td>IEEE HPEC 2014</td>
       <td>Yes</td>
       <td>Best Paper Finalist</td>
     </tr>
-    <tr><td>Big Data Dimensional Analysis - Gadepally et al IEEE HPEC 2014</td>
-      <td></td>
+    <tr>
+      <td>Big Data Dimensional Analysis - Gadepally et al IEEE HPEC 2014</td>
+      <td> </td>
       <td>Use Cases</td>
       <td>IEEE HPEC 2014</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>LLSuperCloud: Sharing HPC systems for diverse rapid prototyping - Reuther et al IEEE HPEC 2013</td>
+    <tr>
+      <td>LLSuperCloud: Sharing HPC systems for diverse rapid prototyping - Reuther et al IEEE HPEC 2013</td>
       <td><a href="http://ieee-hpec.org/2013/index_htm_files/26-HPEC13_LLSuperCloud_Reuther_final.pdf">paper</a> <a href="http://ieee-hpec.org/2013/index_htm_files/HPEC+2013+Reuther+SuperCloud+final.pdf">slides</a></td>
       <td>Architecture</td>
       <td>IEEE HPEC 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Spatio-temporal indexing in non-relational distributed databases - Fox et al 2013 IEEE International Congress on Big Data</td>
+    <tr>
+      <td>Spatio-temporal indexing in non-relational distributed databases - Fox et al 2013 IEEE International Congress on Big Data</td>
       <td><a href="http://geomesa.github.io/assets/outreach/SpatioTemporalIndexing_IEEEcopyright.pdf">paper</a></td>
       <td>Use Cases</td>
       <td>IEEE International Congress on Big Data 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Typograph: Multiscale spatial exploration of text documents - Endert, Alex, et al 2013 IEEE International Congress on Big Data</td>
+    <tr>
+      <td>Typograph: Multiscale spatial exploration of text documents - Endert, Alex, et al 2013 IEEE International Congress on Big Data</td>
       <td><a href="https://people.cs.vt.edu/aendert/Alex_Endert/Research_files/Typograph.pdf">paper</a></td>
       <td>Use Cases</td>
       <td>IEEE International Congress on Big Data 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>UxV Data to the Cloud via Widgets - Charles et al 18th International Command &amp; Control Research &amp; Technology Symposium 2013</td>
+    <tr>
+      <td>UxV Data to the Cloud via Widgets - Charles et al 18th International Command &amp; Control Research &amp; Technology Symposium 2013</td>
       <td><a href="http://www.dodccrp.org/events/18th_iccrts_2013/post_conference/papers/051.pdf">paper</a></td>
       <td>Use Cases</td>
       <td>Command &amp; Control Research &amp; Technology Symposium 2013</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Driving Big Data With Big Compute - Byun et al IEEE HPEC 2012</td>
+    <tr>
+      <td>Driving Big Data With Big Compute - Byun et al IEEE HPEC 2012</td>
       <td><a href="http://www.mit.edu/~kepner/pubs/ByunKepner_2012_BigData_Paper.pdf">paper</a> <a href="http://ieee-hpec.org/2012/index_htm_files/42_ID18.pptx">slides</a></td>
       <td>Architecture</td>
       <td>IEEE HPEC 2012</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Large Scale Network Situational Awareness Via 3D Gaming Technology - Hubbell et al IEEE HPEC 2012</td>
+    <tr>
+      <td>Large Scale Network Situational Awareness Via 3D Gaming Technology - Hubbell et al IEEE HPEC 2012</td>
       <td><a href="http://ieee-hpec.org/2012/index_htm_files/HPEC12_Hubbell.pdf">paper</a> <a href="http://ieee-hpec.org/2012/index_htm_files/31_ID20.pptx">slides</a></td>
       <td>Use Cases</td>
       <td>IEEE HPEC 2012</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Rya: a scalable RDF triple store for the clouds - Punnoose et al 1st International Workshop on Cloud Intelligence, ACM, 2012</td>
+    <tr>
+      <td>Rya: a scalable RDF triple store for the clouds - Punnoose et al 1st International Workshop on Cloud Intelligence, ACM, 2012</td>
       <td><a href="https://sqrrl.com/media/Rya_CloudI20121.pdf">paper</a></td>
       <td>Architecture</td>
       <td>ACM International Workshop on Cloud Intelligence 2012</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Dynamic distributed dimensional data model (D4M) database and computation system - Kepner et al ICASSP 2012</td>
+    <tr>
+      <td>Dynamic distributed dimensional data model (D4M) database and computation system - Kepner et al ICASSP 2012</td>
       <td><a href="http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Paper.pdf">paper</a> <a href="http://www.mit.edu/~kepner/pubs/Kepner_2012_D4M_Slides.pdf">slides</a></td>
       <td>Architecture</td>
       <td>ICASSP 2012</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>An NSA Big Graph experiment (Technical Report NSA-RD-2013-056002v1)</td>
+    <tr>
+      <td>An NSA Big Graph experiment (Technical Report NSA-RD-2013-056002v1)</td>
       <td><a href="http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf">slides</a></td>
       <td>Performance</td>
       <td>Technical Report</td>
       <td>No</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Patil, Swapnil, et al. "YCSB++: benchmarking and performance debugging advanced features in scalable table stores." Proceedings of the 2nd ACM Symposium on Cloud Computing. ACM, 2011.</td>
+    <tr>
+      <td>Patil, Swapnil, et al. “YCSB++: benchmarking and performance debugging advanced features in scalable table stores.” Proceedings of the 2nd ACM Symposium on Cloud Computing. ACM, 2011.</td>
       <td><a href="http://www.pdl.cmu.edu/PDL-FTP/Storage/socc2011.pdf">paper</a> <a href="http://www.cercs.gatech.edu/opencirrus/OCsummit11/presentations/patil.pdf">slides</a></td>
       <td>Performance</td>
       <td>ACM Symposium on Cloud Computing 2011</td>
       <td>Yes</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Turner, Keith "Apache Accumulo 1.4 &amp; 1.5 Features." February, 2012</td>
+    <tr>
+      <td>Turner, Keith “Apache Accumulo 1.4 &amp; 1.5 Features.” February, 2012</td>
       <td><a href="https://home.apache.org/~kturner/accumulo14_15.pdf">slides</a></td>
       <td>Architecture</td>
       <td>Meetup</td>
       <td>No</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Fuchs, Adam "Accumulo: Extensions to Google's Bigtable Design." March, 2012</td>
+    <tr>
+      <td>Fuchs, Adam “Accumulo: Extensions to Google’s Bigtable Design.” March, 2012</td>
       <td><a href="https://home.apache.org/~afuchs/slides/morgan_state_talk.pdf">slides</a></td>
       <td>Architecture</td>
       <td>Meetup</td>
       <td>No</td>
-      <td></td>
+      <td> </td>
     </tr>
-    <tr><td>Miner, Donald "An Introduction to Apache Accumulo - how it works, why it exists, and how it is used." August, 2014</td>
+    <tr>
+      <td>Miner, Donald “An Introduction to Apache Accumulo - how it works, why it exists, and how it is used.” August, 2014</td>
       <td><a href="https://www.slideshare.net/DonaldMiner/an-introduction-to-accumulo">slides</a></td>
       <td>Architecture</td>
       <td>Online</td>
       <td>No</td>
-      <td />
+      <td> </td>
     </tr>
   </tbody>
 </table>
@@ -499,6 +532,7 @@ $("#citationtable").dataTable();
   <li><a href="https://en.wikipedia.org/wiki/Bloom_filter">Bloom Filter</a></li>
 </ul>
 
+
       </div>
 
       

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.5.1.html
----------------------------------------------------------------------
diff --git a/release_notes/1.5.1.html b/release_notes/1.5.1.html
index 00791a3..68715cd 100644
--- a/release_notes/1.5.1.html
+++ b/release_notes/1.5.1.html
@@ -392,55 +392,60 @@ has a set of tests that must be run before the candidate is capable of becoming
 <p>Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.</p>
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>HDP 2.0 (Apache 2.2.0)</td>
-    <td>6</td>
-    <td>HDP 2.0 (Apache 3.4.5)</td>
-    <td>Yes (QJM)</td>
-    <td>All required tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
-    <td>7</td>
-    <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
-    <td>Yes (QJM)</td>
-    <td>Unit, functional and 24hr Randomwalk w/ agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
-    <td>7</td>
-    <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
-    <td>Yes (QJM)</td>
-    <td>2x 24/hr continuous ingest w/ verification</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.3</td>
-    <td>Apache 1.0.4</td>
-    <td>1</td>
-    <td>Apache 3.3.5</td>
-    <td>No</td>
-    <td>Local testing, unit and functional tests</td>
-  </tr>
-  <tr>
-    <td>RHEL 6.4</td>
-    <td>Apache 2.2.0</td>
-    <td>10</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Functional tests</td>
-  </tr>
+
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS High-Availability</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CentOS 6.5</td>
+      <td>HDP 2.0 (Apache 2.2.0)</td>
+      <td>6</td>
+      <td>HDP 2.0 (Apache 3.4.5)</td>
+      <td>Yes (QJM)</td>
+      <td>All required tests</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
+      <td>7</td>
+      <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
+      <td>Yes (QJM)</td>
+      <td>Unit, functional and 24hr Randomwalk w/ agitation</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>CDH 4.5.0 (2.0.0+cdh4.5.0)</td>
+      <td>7</td>
+      <td>CDH 4.5.0 (3.4.5+cdh4.5.0)</td>
+      <td>Yes (QJM)</td>
+      <td>2x 24/hr continuous ingest w/ verification</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.3</td>
+      <td>Apache 1.0.4</td>
+      <td>1</td>
+      <td>Apache 3.3.5</td>
+      <td>No</td>
+      <td>Local testing, unit and functional tests</td>
+    </tr>
+    <tr>
+      <td>RHEL 6.4</td>
+      <td>Apache 2.2.0</td>
+      <td>10</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>Functional tests</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.5.2.html
----------------------------------------------------------------------
diff --git a/release_notes/1.5.2.html b/release_notes/1.5.2.html
index d524b71..55bc57e 100644
--- a/release_notes/1.5.2.html
+++ b/release_notes/1.5.2.html
@@ -373,31 +373,36 @@ constraint no longer hangs the entire system.</p>
 <p>Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.</p>
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Apache 2.6.0-SNAPSHOT</td>
-    <td>1</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests, ContinuousIngest w/ verification (1B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>Apache 2.3.0</td>
-    <td>20</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>24/hr RandomWalk, 24/hr ContinuousIngest w/ verification w/ and w/o agitation (30B and 23B entries)</td>
-  </tr>
+
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS High-Availability</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Gentoo</td>
+      <td>Apache 2.6.0-SNAPSHOT</td>
+      <td>1</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>Unit and Functional Tests, ContinuousIngest w/ verification (1B entries)</td>
+    </tr>
+    <tr>
+      <td>CentOS 6</td>
+      <td>Apache 2.3.0</td>
+      <td>20</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>24/hr RandomWalk, 24/hr ContinuousIngest w/ verification w/ and w/o agitation (30B and 23B entries)</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.5.3.html
----------------------------------------------------------------------
diff --git a/release_notes/1.5.3.html b/release_notes/1.5.3.html
index c6ed66c..90b0e89 100644
--- a/release_notes/1.5.3.html
+++ b/release_notes/1.5.3.html
@@ -323,31 +323,35 @@ of these failures. Users are encouraged to follow
 One possible workaround is to increase the <code class="highlighter-rouge">general.rpc.timeout</code> in the
 Accumulo configuration from <code class="highlighter-rouge">120s</code> to <code class="highlighter-rouge">240s</code>.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>2.6.0</td>
-    <td>1</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS High-Availability</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Gentoo</td>
+      <td>2.6.0</td>
+      <td>1</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>Unit and Integration Tests</td>
+    </tr>
+    <tr>
+      <td>Centos 6.5</td>
+      <td>2.7.1</td>
+      <td>6</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>Continuous Ingest and Verify</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.5.4.html
----------------------------------------------------------------------
diff --git a/release_notes/1.5.4.html b/release_notes/1.5.4.html
index b06cb77..cc2ddc2 100644
--- a/release_notes/1.5.4.html
+++ b/release_notes/1.5.4.html
@@ -285,31 +285,35 @@ and Continuous Ingest tests run on any number of nodes. <em>Agitation</em> refer
 randomly restarting Accumulo processes and Hadoop DataNode processes, and, in
 HDFS High-Availability instances, forcing NameNode fail-over.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>OSX</td>
-    <td>2.6.0</td>
-    <td>1</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify (10B entries), Randomwalk (24hrs)</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS High-Availability</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>OSX</td>
+      <td>2.6.0</td>
+      <td>1</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>Unit and Functional Tests</td>
+    </tr>
+    <tr>
+      <td>Centos 6.5</td>
+      <td>2.7.1</td>
+      <td>6</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>Continuous Ingest and Verify (10B entries), Randomwalk (24hrs)</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.0.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.0.html b/release_notes/1.6.0.html
index 248f26c..9d027cb 100644
--- a/release_notes/1.6.0.html
+++ b/release_notes/1.6.0.html
@@ -502,107 +502,111 @@ and, in HDFS High-Availability instances, forcing NameNode failover.</p>
   <li>RW : Random Walk</li>
 </ul>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Java</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS HA</th>
-    <th>Version/Commit hash</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC1 + ACCUMULO_2668 patch</td>
-    <td>24-hour CI w/o agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC2</td>
-    <td>24-hour RW (Conditional.xml module) w/o agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.7</td>
-    <td>Apache 2.2.0</td>
-    <td>20 EC2 nodes</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>24-hour CI w/ agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.5</td>
-    <td>CentOS OpenJDK 1.6 and 1.7</td>
-    <td>Apache 1.2.1, 2.2.0</td>
-    <td>Single</td>
-    <td>Apache 3.3.6</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>All unit and ITs w/  <code>-Dhadoop.profile=2</code> and <code>-Dhadoop.profile=1</code></td>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Sun JDK 1.6.0_45</td>
-    <td>Apache 1.2.1, 2.2.0, 2.3.0, 2.4.0</td>
-    <td>Single</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>1.6.0 RC5</td>
-    <td>All unit and ITs. 2B entries ingested/verified with CI </td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC4 and RC5</td>
-    <td>24-hour RW (LongClean) with and without agitation</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>3a1b38</td>
-    <td>72-hour CI with and without agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC2</td>
-    <td>24-hour CI without agitation. Verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.4</td>
-    <td>Sun JDK 1.6.0_31</td>
-    <td>CDH 4.5.0</td>
-    <td>7</td>
-    <td>CDH 4.5.0</td>
-    <td>Yes</td>
-    <td>1.6.0 RC3</td>
-    <td>24-hour CI with agitation. Verified.</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Java</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Version/Commit hash</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CentOS 6.5</td>
+      <td>CentOS OpenJDK 1.7</td>
+      <td>Apache 2.2.0</td>
+      <td>20 EC2 nodes</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>1.6.0 RC1 + ACCUMULO_2668 patch</td>
+      <td>24-hour CI w/o agitation. Verified.</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.5</td>
+      <td>CentOS OpenJDK 1.7</td>
+      <td>Apache 2.2.0</td>
+      <td>20 EC2 nodes</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>1.6.0 RC2</td>
+      <td>24-hour RW (Conditional.xml module) w/o agitation</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.5</td>
+      <td>CentOS OpenJDK 1.7</td>
+      <td>Apache 2.2.0</td>
+      <td>20 EC2 nodes</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>1.6.0 RC5</td>
+      <td>24-hour CI w/ agitation. Verified.</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.5</td>
+      <td>CentOS OpenJDK 1.6 and 1.7</td>
+      <td>Apache 1.2.1, 2.2.0</td>
+      <td>Single</td>
+      <td>Apache 3.3.6</td>
+      <td>No</td>
+      <td>1.6.0 RC5</td>
+      <td>All unit and ITs w/  <code class="highlighter-rouge">-Dhadoop.profile=2</code> and <code class="highlighter-rouge">-Dhadoop.profile=1</code></td>
+    </tr>
+    <tr>
+      <td>Gentoo</td>
+      <td>Sun JDK 1.6.0_45</td>
+      <td>Apache 1.2.1, 2.2.0, 2.3.0, 2.4.0</td>
+      <td>Single</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>1.6.0 RC5</td>
+      <td>All unit and ITs. 2B entries ingested/verified with CI</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>Sun JDK 1.6.0_31</td>
+      <td>CDH 4.5.0</td>
+      <td>7</td>
+      <td>CDH 4.5.0</td>
+      <td>Yes</td>
+      <td>1.6.0 RC4 and RC5</td>
+      <td>24-hour RW (LongClean) with and without agitation</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>Sun JDK 1.6.0_31</td>
+      <td>CDH 4.5.0</td>
+      <td>7</td>
+      <td>CDH 4.5.0</td>
+      <td>Yes</td>
+      <td>3a1b38</td>
+      <td>72-hour CI with and without agitation. Verified.</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>Sun JDK 1.6.0_31</td>
+      <td>CDH 4.5.0</td>
+      <td>7</td>
+      <td>CDH 4.5.0</td>
+      <td>Yes</td>
+      <td>1.6.0 RC2</td>
+      <td>24-hour CI without agitation. Verified.</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.4</td>
+      <td>Sun JDK 1.6.0_31</td>
+      <td>CDH 4.5.0</td>
+      <td>7</td>
+      <td>CDH 4.5.0</td>
+      <td>Yes</td>
+      <td>1.6.0 RC3</td>
+      <td>24-hour CI with agitation. Verified.</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.1.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.1.html b/release_notes/1.6.1.html
index 3be3560..fe3b69d 100644
--- a/release_notes/1.6.1.html
+++ b/release_notes/1.6.1.html
@@ -383,31 +383,36 @@ environments, this is very problematic as there is no means to stop the Scanner
 <p>Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.</p>
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>Apache 2.6.0-SNAPSHOT</td>
-    <td>2</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>Unit and Functional Tests, ContinuousIngest w/ verification (2B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>Apache 2.3.0</td>
-    <td>20</td>
-    <td>Apache 3.4.5</td>
-    <td>No</td>
-    <td>24/hr RandomWalk, ContinuousIngest w/ verification w/ and w/o agitation (17B entries), 24hr Randomwalk test</td>
-  </tr>
+
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Gentoo</td>
+      <td>Apache 2.6.0-SNAPSHOT</td>
+      <td>2</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>Unit and Functional Tests, ContinuousIngest w/ verification (2B entries)</td>
+    </tr>
+    <tr>
+      <td>CentOS 6</td>
+      <td>Apache 2.3.0</td>
+      <td>20</td>
+      <td>Apache 3.4.5</td>
+      <td>No</td>
+      <td>24/hr RandomWalk, ContinuousIngest w/ verification w/ and w/o agitation (17B entries), 24hr Randomwalk test</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.2.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.2.html b/release_notes/1.6.2.html
index f34b650..de57f09 100644
--- a/release_notes/1.6.2.html
+++ b/release_notes/1.6.2.html
@@ -372,47 +372,52 @@ for configuring an instance with native maps.</p>
 <p>Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
 on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.</p>
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Mac OSX</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Fedora 21</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 6</td>
-    <td>2.6</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively)</td>
-  </tr>
+
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Gentoo</td>
+      <td>N/A</td>
+      <td>1</td>
+      <td>N/A</td>
+      <td>No</td>
+      <td>Unit and Integration Tests</td>
+    </tr>
+    <tr>
+      <td>Mac OSX</td>
+      <td>N/A</td>
+      <td>1</td>
+      <td>N/A</td>
+      <td>No</td>
+      <td>Unit and Integration Tests</td>
+    </tr>
+    <tr>
+      <td>Fedora 21</td>
+      <td>N/A</td>
+      <td>1</td>
+      <td>N/A</td>
+      <td>No</td>
+      <td>Unit and Integration Tests</td>
+    </tr>
+    <tr>
+      <td>CentOS 6</td>
+      <td>2.6</td>
+      <td>20</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively)</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.3.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.3.html b/release_notes/1.6.3.html
index 833665d..b63fe88 100644
--- a/release_notes/1.6.3.html
+++ b/release_notes/1.6.3.html
@@ -317,47 +317,51 @@ and Continuous Ingest tests run on any number of nodes. <em>Agitation</em> refer
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr ContinuousIngest w/ verification w/ and w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr Randomwalk w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Centos 6.5</td>
-    <td>2.7.1</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>Continuous Ingest and Verify (6B entries)</td>
-  </tr>
-  <tr>
-    <td>Centos 6.6</td>
-    <td>2.2.0</td>
-    <td>6</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>All integration test passed.  Some needed to be run a 2nd time.</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Amazon Linux 2014.09</td>
+      <td>2.6.0</td>
+      <td>20</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>24hr ContinuousIngest w/ verification w/ and w/o agitation</td>
+    </tr>
+    <tr>
+      <td>Amazon Linux 2014.09</td>
+      <td>2.6.0</td>
+      <td>20</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>24hr Randomwalk w/o agitation</td>
+    </tr>
+    <tr>
+      <td>Centos 6.5</td>
+      <td>2.7.1</td>
+      <td>6</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>Continuous Ingest and Verify (6B entries)</td>
+    </tr>
+    <tr>
+      <td>Centos 6.6</td>
+      <td>2.2.0</td>
+      <td>6</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>All integration test passed.  Some needed to be run a 2nd time.</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.4.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.4.html b/release_notes/1.6.4.html
index d7ebad3..a41fb47 100644
--- a/release_notes/1.6.4.html
+++ b/release_notes/1.6.4.html
@@ -282,23 +282,27 @@ and Continuous Ingest tests run on any number of nodes. <em>Agitation</em> refer
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Amazon Linux 2014.09</td>
-    <td>2.6.0</td>
-    <td>20</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>ContinuousIngest w/ verification w/ and w/o agitation (37B entries)</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Amazon Linux 2014.09</td>
+      <td>2.6.0</td>
+      <td>20</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>ContinuousIngest w/ verification w/ and w/o agitation (37B entries)</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.6.5.html
----------------------------------------------------------------------
diff --git a/release_notes/1.6.5.html b/release_notes/1.6.5.html
index 50bdc5a..545b5de 100644
--- a/release_notes/1.6.5.html
+++ b/release_notes/1.6.5.html
@@ -314,47 +314,51 @@ and Continuous Ingest tests run on any number of nodes. <em>Agitation</em> refer
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 7.1</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Random walk (All.xml) 18-hour run (2 failures, both conflicting operations on same table in Concurrent test)</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1</td>
-    <td>2.6.3</td>
-    <td>6</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Continuous ingest with agitation (2B entries)</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.7</td>
-    <td>2.2.0 and 1.2.1</td>
-    <td>1</td>
-    <td>3.3.6</td>
-    <td>No</td>
-    <td>All unit and integration tests</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 (Oracle JDK8)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Continuous ingest with agitation (24hrs, 32B entries verified) on EC2 (1 m3.xlarge leader; 8 d2.xlarge workers)</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CentOS 7.1</td>
+      <td>2.6.3</td>
+      <td>9</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>Random walk (All.xml) 18-hour run (2 failures, both conflicting operations on same table in Concurrent test)</td>
+    </tr>
+    <tr>
+      <td>CentOS 7.1</td>
+      <td>2.6.3</td>
+      <td>6</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>Continuous ingest with agitation (2B entries)</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.7</td>
+      <td>2.2.0 and 1.2.1</td>
+      <td>1</td>
+      <td>3.3.6</td>
+      <td>No</td>
+      <td>All unit and integration tests</td>
+    </tr>
+    <tr>
+      <td>CentOS 7.1 (Oracle JDK8)</td>
+      <td>2.6.3</td>
+      <td>9</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>Continuous ingest with agitation (24hrs, 32B entries verified) on EC2 (1 m3.xlarge leader; 8 d2.xlarge workers)</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.7.0.html
----------------------------------------------------------------------
diff --git a/release_notes/1.7.0.html b/release_notes/1.7.0.html
index 5a2f260..71b7562 100644
--- a/release_notes/1.7.0.html
+++ b/release_notes/1.7.0.html
@@ -599,47 +599,51 @@ of these failures. Users are encouraged to follow
 One possible workaround is to increase the <code class="highlighter-rouge">general.rpc.timeout</code> in the
 Accumulo configuration from <code class="highlighter-rouge">120s</code> to <code class="highlighter-rouge">240s</code>.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Gentoo</td>
-    <td>2.6.0</td>
-    <td>1 (2 TServers)</td>
-    <td>3.4.5</td>
-    <td>No</td>
-    <td>24hr CI w/ agitation and verification, 24hr RW w/o agitation.</td>
-  </tr>
-  <tr>
-    <td>Centos 6.6</td>
-    <td>2.6.0</td>
-    <td>3</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24hr RW w/ agitation, 24hr CI w/o agitation, 72hr CI w/ and w/o agitation</td>
-  </tr>
-  <tr>
-    <td>Amazon Linux</td>
-    <td>2.6.0</td>
-    <td>20 m1large</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24hr CI w/o agitation</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Gentoo</td>
+      <td>N/A</td>
+      <td>1</td>
+      <td>N/A</td>
+      <td>No</td>
+      <td>Unit and Integration Tests</td>
+    </tr>
+    <tr>
+      <td>Gentoo</td>
+      <td>2.6.0</td>
+      <td>1 (2 TServers)</td>
+      <td>3.4.5</td>
+      <td>No</td>
+      <td>24hr CI w/ agitation and verification, 24hr RW w/o agitation.</td>
+    </tr>
+    <tr>
+      <td>Centos 6.6</td>
+      <td>2.6.0</td>
+      <td>3</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>24hr RW w/ agitation, 24hr CI w/o agitation, 72hr CI w/ and w/o agitation</td>
+    </tr>
+    <tr>
+      <td>Amazon Linux</td>
+      <td>2.6.0</td>
+      <td>20 m1large</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>24hr CI w/o agitation</td>
+    </tr>
+  </tbody>
 </table>
 
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/release_notes/1.7.1.html
----------------------------------------------------------------------
diff --git a/release_notes/1.7.1.html b/release_notes/1.7.1.html
index e6cffb3..639d22e 100644
--- a/release_notes/1.7.1.html
+++ b/release_notes/1.7.1.html
@@ -349,47 +349,51 @@ and Continuous Ingest tests run on any number of nodes. <em>Agitation</em> refer
 randomly restarting Accumulo processes and Hadoop Datanode processes, and, in
 HDFS High-Availability instances, forcing NameNode failover.</p>
 
-<table id="release_notes_testing">
-  <tr>
-    <th>OS/Environment</th>
-    <th>Hadoop</th>
-    <th>Nodes</th>
-    <th>ZooKeeper</th>
-    <th>HDFS High-Availability</th>
-    <th>Tests</th>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>Random walk (All.xml) 24-hour run, saw <a href="https://issues.apache.org/jira/browse/ACCUMULO-3794">ACCUMULO-3794</a> and <a href="https://issues.apache.org/jira/browse/ACCUMULO-4151">ACCUMULO-4151</a>.</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>21 hr run of CI w/ agitation, 23.1B entries verified.</td>
-  </tr>
-  <tr>
-    <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
-    <td>2.6.3</td>
-    <td>9</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>24 hr run of CI w/o agitation, 23.0B entries verified; saw performance issues outlined in comment on <a href="https://issues.apache.org/jira/browse/ACCUMULO-4146">ACCUMULO-4146</a>.</td>
-  </tr>
-  <tr>
-    <td>CentOS 6.7 (OpenJDK 7), Fedora 23 (OpenJDK 8), and CentOS 7.2 (OpenJDK 7)</td>
-    <td>2.6.1</td>
-    <td>1</td>
-    <td>3.4.6</td>
-    <td>No</td>
-    <td>All unit tests and ITs pass with -Dhadoop.version=2.6.1; Kerberos ITs had a problem with earlier versions of Hadoop</td>
-  </tr>
+<table id="release_notes_testing" class="table">
+  <thead>
+    <tr>
+      <th>OS/Environment</th>
+      <th>Hadoop</th>
+      <th>Nodes</th>
+      <th>ZooKeeper</th>
+      <th>HDFS HA</th>
+      <th>Tests</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
+      <td>2.6.3</td>
+      <td>9</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>Random walk (All.xml) 24-hour run, saw <a href="https://issues.apache.org/jira/browse/ACCUMULO-3794">ACCUMULO-3794</a> and <a href="https://issues.apache.org/jira/browse/ACCUMULO-4151">ACCUMULO-4151</a>.</td>
+    </tr>
+    <tr>
+      <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
+      <td>2.6.3</td>
+      <td>9</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>21 hr run of CI w/ agitation, 23.1B entries verified.</td>
+    </tr>
+    <tr>
+      <td>CentOS 7.1 w/Oracle JDK8 on EC2 (1 m3.xlarge, 8 d2.xlarge)</td>
+      <td>2.6.3</td>
+      <td>9</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>24 hr run of CI w/o agitation, 23.0B entries verified; saw performance issues outlined in comment on <a href="https://issues.apache.org/jira/browse/ACCUMULO-4146">ACCUMULO-4146</a>.</td>
+    </tr>
+    <tr>
+      <td>CentOS 6.7 (OpenJDK 7), Fedora 23 (OpenJDK 8), and CentOS 7.2 (OpenJDK 7)</td>
+      <td>2.6.1</td>
+      <td>1</td>
+      <td>3.4.6</td>
+      <td>No</td>
+      <td>All unit tests and ITs pass with -Dhadoop.version=2.6.1; Kerberos ITs had a problem with earlier versions of Hadoop</td>
+    </tr>
+  </tbody>
 </table>
 
 


[2/4] accumulo git commit: Favor markdown over html

Posted by ct...@apache.org.
Favor markdown over html

Use markdown tables whenever possible
Fix broken html tables with markdown in wikisearch example
Use markdown for glossary
Fix sections with markdown instead of html in bylaws
Other minor fixes using markdown over html
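
The markdown tables in this change use kramdown's inline attribute list (IAL)
syntax: an attribute block on the line directly before a pipe table (as used
here) carries the id and CSS classes that the hand-written <table> tags
previously held. A minimal sketch of the pattern, with row values abbreviated
for illustration; the id and .table class are the ones css/accumulo.css
already styles:

    {: #release_notes_testing .table }
    | OS         | Hadoop | Nodes | Tests                   |
    |------------|--------|-------|-------------------------|
    | CentOS 7.1 | 2.6.3  | 9     | 24 hr continuous ingest |

Built with Jekyll's kramdown converter, this should render as
<table id="release_notes_testing" class="table">, so the simplified selectors
in css/accumulo.css (e.g. #release_notes_testing td) continue to apply.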


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/964cf811
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/964cf811
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/964cf811

Branch: refs/heads/gh-pages
Commit: 964cf811c421e215804e8c72f575902acf9daa91
Parents: d558c0d
Author: Christopher Tubbs <ct...@apache.org>
Authored: Thu Apr 28 23:54:38 2016 -0400
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Thu Apr 28 23:54:38 2016 -0400

----------------------------------------------------------------------
 bylaws.md              | 141 +++---------
 css/accumulo.css       |   4 +-
 downloads/index.md     |  72 +++---
 example/wikisearch.md  | 528 +++++++++++++++-----------------------------
 glossary.md            | 170 +++++++++-----
 index.md               |  23 +-
 mailing_list.md        |  83 ++++---
 notable_features.md    |  47 ++--
 old_documentation.md   |  30 ++-
 papers/index.md        | 250 +++++----------------
 release_notes/1.5.1.md |  59 +----
 release_notes/1.5.2.md |  32 +--
 release_notes/1.5.3.md |  31 +--
 release_notes/1.5.4.md |  31 +--
 release_notes/1.6.0.md | 116 ++--------
 release_notes/1.6.1.md |  31 +--
 release_notes/1.6.2.md |  49 +---
 release_notes/1.6.3.md |  49 +---
 release_notes/1.6.4.md |  23 +-
 release_notes/1.6.5.md |  49 +---
 release_notes/1.7.0.md |  49 +---
 release_notes/1.7.1.md |  53 +----
 22 files changed, 619 insertions(+), 1301 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/bylaws.md
----------------------------------------------------------------------
diff --git a/bylaws.md b/bylaws.md
index 53458bd..291ed70 100644
--- a/bylaws.md
+++ b/bylaws.md
@@ -97,34 +97,18 @@ Within the Accumulo project, different types of decisions require different form
 
 Decisions regarding the project are made by votes on the primary project development mailing list: dev@accumulo.apache.org. Where necessary, PMC voting may take place on the private Accumulo PMC mailing list: private@accumulo.apache.org. Votes are clearly indicated by a subject line starting with [VOTE]. A vote message may only pertain to a single item’s approval; multiple items should be separated into multiple messages. Voting is carried out by replying to the vote mail. A vote may take on one of four forms, defined below.
 
-<table class="table">
-  <tr>
-    <th>Vote</th>
-    <th>Meaning</th>
-  </tr>
-  <tr>
-    <td>+1</td>
-    <td>'Yes,' 'Agree,' or 'The action should be performed.' In general, this vote also indicates a willingness on the behalf of the voter to 'make it happen'.</td>
-  </tr>
-  <tr>
-    <td>+0</td>
-    <td>This vote indicates a willingness for the action under consideration to go ahead. The voter, however, will not be able to help.</td>
-  </tr>
-  <tr>
-    <td>-0</td>
-    <td>This vote indicates that the voter does not, in general, agree with the proposed action but is not concerned enough to prevent the action going ahead.</td>
-  </tr>
-  <tr>
-    <td>-1</td>
-    <td>'No', 'Disagree', or 'The action should not be performed.' On issues where consensus is required, this vote counts as a veto. All vetoes must contain an explanation of why the veto is appropriate. Vetoes with no explanation are void. It may also be appropriate for a -1 vote to include an alternative course of action.</td>
-  </tr>
-</table>
+{: .table }
+| Vote | Meaning |
+|------|---------|
+| +1   | *Yes*, *Agree*, or *The action should be performed*. In general, this vote also indicates a willingness on the behalf of the voter to *make it happen*. |
+| +0   | This vote indicates a willingness for the action under consideration to go ahead. The voter, however, will not be able to help.                         |
+| -0   | This vote indicates that the voter does not, in general, agree with the proposed action but is not concerned enough to prevent the action going ahead.  |
+| -1   | *No*, *Disagree*, or *The action should not be performed*. On issues where consensus is required, this vote counts as a veto. All vetoes must contain an explanation of why the veto is appropriate. Vetoes with no explanation are void. It may also be appropriate for a -1 vote to include an alternative course of action. |
 
 All participants in the Accumulo project are encouraged to vote. For technical decisions, only the votes of active committers are binding. Non-binding votes are still useful for those with binding votes to understand the perception of an action across the wider Accumulo community. For PMC decisions, only the votes of active PMC members are binding.
 
 See the [voting page][voting] for more details on the mechanics of voting.
 
-<a id="CTR"></a>
 ## Commit Then Review (CTR)
 
 Voting can also be applied to changes to the Accumulo codebase. Under the Commit Then Review policy, committers can make changes to the codebase without seeking approval beforehand, and the changes are assumed to be approved unless an objection is raised. Only if an objection is raised must a vote take place on the code change.
@@ -135,24 +119,12 @@ For some code changes, committers may wish to get feedback from the community be
 
 These are the types of approvals that can be sought. Different actions require different types of approvals.
 
-<table class="table">
-  <tr>
-    <th>Approval Type</th>
-    <th>Definition</th>
-  </tr>
-  <tr>
-    <td>Consensus Approval</td>
-    <td>A consensus approval vote passes with 3 binding +1 votes and no binding vetoes.</td>
-  </tr>
-  <tr>
-    <td>Majority Approval</td>
-    <td>A majority approval vote passes with 3 binding +1 votes and more binding +1 votes than -1 votes.</td>
-  </tr>
-  <tr>
-    <td>Lazy Approval (or Lazy Consensus)</td>
-    <td>An action with lazy approval is implicitly allowed unless a -1 vote is received, at which time, depending on the type of action, either majority approval or consensus approval must be obtained.  Lazy Approval can be either <em>stated</em> or <em>assumed</em>, as detailed on the <a href="governance/lazyConsensus">lazy consensus page</a>.</td>
-  </tr>
-</table>
+{: .table }
+| Approval&nbsp;Type                | Definition                                                                                       |
+|-----------------------------------|--------------------------------------------------------------------------------------------------|
+| Consensus Approval                | A consensus approval vote passes with 3 binding +1 votes and no binding vetoes.                  |
+| Majority Approval                 | A majority approval vote passes with 3 binding +1 votes and more binding +1 votes than -1 votes. |
+| Lazy Approval (or Lazy Consensus) | An action with lazy approval is implicitly allowed unless a -1 vote is received, at which time, depending on the type of action, either majority approval or consensus approval must be obtained.  Lazy Approval can be either *stated* or *assumed*, as detailed on the [lazy consensus page][lazy]. |
 
 ## Vetoes
 
@@ -166,78 +138,18 @@ This section describes the various actions which are undertaken within the proje
 
 For Code Change actions, a committer may choose to employ assumed or stated Lazy Approval under the [CTR][ctr] policy. Assumed Lazy Approval has no minimum length of time before the change can be made.
 
-<table class="table">
-  <tr>
-    <th>Action</th>
-    <th>Description</th>
-    <th>Approval</th>
-    <th>Binding Votes</th>
-    <th>Min. Length (days)</th>
-  </tr>
-  <tr>
-    <td>Code Change</td>
-    <td>A change made to a codebase of the project. This includes source code, documentation, website content, etc.</td>
-    <td>Lazy approval, moving to consensus approval upon veto</td>
-    <td>Active committers</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>Release Plan</td>
-    <td>Defines the timetable and actions for an upcoming release. The plan also nominates a Release Manager.</td>
-    <td>Lazy approval, moving to majority approval upon veto</td>
-    <td>Active committers</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Release Plan Cancellation</td>
-    <td>Cancels an active release plan, due to a need to re-plan (e.g., discovery of a major issue).</td>
-    <td>Majority approval</td>
-    <td>Active committers</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Product Release</td>
-    <td>Accepts or rejects a release candidate as an official release of the project.</td>
-    <td>Majority approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Adoption of New Codebase</td>
-    <td>When the codebase for an existing, released product is to be replaced with an alternative codebase. If such a vote fails to gain approval, the existing code base will continue. This also covers the creation of new sub-projects within the project.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>7</td>
-  </tr>
-  <tr>
-    <td>New Committer</td>
-    <td>When a new committer is proposed for the project.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>New PMC Member</td>
-    <td>When a committer is proposed for the PMC.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>New PMC Chair</td>
-    <td>When a new PMC chair is chosen to succeed an outgoing chair.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Modifying Bylaws</td>
-    <td>Modifying this document.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>7</td>
-  </tr>
-</table>
+{: .table }
+| Action                    | Description                                                                                                 | Approval                                              | Binding Votes      | Min. Length (days) |
+|---------------------------|-------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|--------------------|--------------------|
+| Code Change               | A change made to a codebase of the project. This includes source code, documentation, website content, etc. | Lazy approval, moving to consensus approval upon veto | Active committers  | 1                  |
+| Release Plan              | Defines the timetable and actions for an upcoming release. The plan also nominates a Release Manager.       | Lazy approval, moving to majority approval upon veto  | Active committers  | 3                  |
+| Release Plan Cancellation | Cancels an active release plan, due to a need to re-plan (e.g., discovery of a major issue).                | Majority approval                                     | Active committers  | 3                  |
+| Product Release           | Accepts or rejects a release candidate as an official release of the project.                               | Majority approval                                     | Active PMC members | 3                  |
+| Adoption of New Codebase  | When the codebase for an existing, released product is to be replaced with an alternative codebase. If such a vote fails to gain approval, the existing code base will continue. This also covers the creation of new sub-projects within the project. | Consensus approval | Active PMC members | 7 |
+| New Committer             | When a new committer is proposed for the project.                                                           | Consensus approval                                    | Active PMC members | 3                  |
+| New PMC Member            | When a committer is proposed for the PMC.                                                                   | Consensus approval                                    | Active PMC members | 3                  |
+| New PMC Chair             | When a new PMC chair is chosen to succeed an outgoing chair.                                                | Consensus approval                                    | Active PMC members | 3                  |
+| Modifying Bylaws          | Modifying this document.                                                                                    | Consensus approval                                    | Active PMC members | 7                  |
 
 No other voting actions are defined; all other actions should presume Lazy Approval (defaulting to Consensus Approval upon veto). If an action is voted on multiple times, or if a different approval type is desired, these bylaws should be amended to include the action.
 
@@ -275,6 +187,7 @@ All dates in a plan are estimates, as unforeseen issues may require delays. The
 [release-guidelines]: governance/releasing
 [release-mechanics]: releasing
 [voting]: governance/voting
-[ctr]: #CTR
+[ctr]: #commit-then-review-ctr
 [committer-terms]: https://www.apache.org/dev/committers#committer-set-term
 [pmc-removal]: https://www.apache.org/dev/pmc#pmc-removal
+[lazy]: governance/lazyConsensus

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/css/accumulo.css
----------------------------------------------------------------------
diff --git a/css/accumulo.css b/css/accumulo.css
index 5f1969c..a87a2c0 100644
--- a/css/accumulo.css
+++ b/css/accumulo.css
@@ -74,11 +74,11 @@ footer > p {
     border-collapse:collapse;
 }
 
-#release_notes_testing, #release_notes_testing tbody tr, #release_notes_testing tbody tr th, #release_notes_testing tbody tr td {
+#release_notes_testing, #release_notes_testing tr, #release_notes_testing th, #release_notes_testing td {
     border: 2px solid black;
 }
 
-#release_notes_testing tbody tr th, #release_notes_testing tbody tr td {
+#release_notes_testing th, #release_notes_testing td {
     padding: 5px;
 }
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/downloads/index.md
----------------------------------------------------------------------
diff --git a/downloads/index.md b/downloads/index.md
index 068db12..380ade0 100644
--- a/downloads/index.md
+++ b/downloads/index.md
@@ -79,7 +79,7 @@ var mirrorsCallback = function(json) {
 };
 
 // get mirrors when page is ready
-var mirrorURL = "/mirrors.cgi"; // http[s]://accumulo.apache.org/mirrors.cgi
+var mirrorURL = "http://accumulo.apache.org/mirrors.cgi"; // http[s]://accumulo.apache.org/mirrors.cgi
 $(function() { $.getJSON(mirrorURL + "?as_json", mirrorsCallback); });
 
 </script>
@@ -91,30 +91,21 @@ Be sure to verify your downloads by these [procedures][VERIFY_PROCEDURES] using
 
 ## Current Releases
 
-### 1.7.1 <span class="label label-primary">latest</span>
+### 1.7.1 **latest**{: .label .label-primary }
 
 The most recent Apache Accumulo&trade; release is version 1.7.1. See the [release notes][REL_NOTES_17] and [CHANGES][CHANGES_17].
 
 For convenience, [MD5][MD5SUM_17] and [SHA1][SHA1SUM_17] hashes are also available.
 
-<table class="table">
-<tr>
-<th>Generic Binaries</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" class="download_external" id="/downloads/accumulo-1.7.1-bin.tar.gz">accumulo-1.7.1-bin.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz.asc">ASC</a></td>
-</tr>
-<tr>
-<th>Source</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" class="download_external" id="/downloads/accumulo-1.7.1-src.tar.gz">accumulo-1.7.1-src.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz.asc">ASC</a></td>
-</tr>
-</table>
+{: .table }
+| **Generic Binaries** | [accumulo-1.7.1-bin.tar.gz][BIN_17] | [ASC][ASC_BIN_17] |
+| **Source**           | [accumulo-1.7.1-src.tar.gz][SRC_17] | [ASC][ASC_SRC_17] |
 
 #### 1.7 Documentation
-* <a href="https://github.com/apache/accumulo/blob/rel/1.7.1/README.md" class="download_external" id="/1.7/README">README</a>
+* [README][README_17]
 * [HTML User Manual][MANUAL_HTML_17]
 * [Examples][EXAMPLES_17]
-* <a href="{{ site.baseurl }}/1.7/apidocs" class="download_external" id="/1.7/apidocs">Javadoc</a>
+* [Javadoc][JAVADOC_17]
 
 ### 1.6.5
 
@@ -122,25 +113,16 @@ The most recent 1.6.x release of Apache Accumulo&trade; is version 1.6.5. See th
 
 For convenience, [MD5][MD5SUM_16] and [SHA1][SHA1SUM_16] hashes are also available.
 
-<table class="table">
-<tr>
-<th>Generic Binaries</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" class="download_external" id="/downloads/accumulo-1.6.5-bin.tar.gz">accumulo-1.6.5-bin.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz.asc">ASC</a></td>
-</tr>
-<tr>
-<th>Source</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" class="download_external" id="/downloads/accumulo-1.6.5-src.tar.gz">accumulo-1.6.5-src.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz.asc">ASC</a></td>
-</tr>
-</table>
+{: .table }
+| **Generic Binaries** | [accumulo-1.6.5-bin.tar.gz][BIN_16] | [ASC][ASC_BIN_16] |
+| **Source**           | [accumulo-1.6.5-src.tar.gz][SRC_16] | [ASC][ASC_SRC_16] |
 
 #### 1.6 Documentation
-* <a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=rel/1.6.5" class="download_external" id="/1.6/README">README</a>
-* <a href="https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-docs/1.6.5/accumulo-docs-1.6.5-user-manual.pdf" class="download_external" id="/1.6/accumulo_user_manual.pdf">PDF manual</a>
+* [README][README_16]
+* [PDF manual][MANUAL_PDF_16]
 * [html manual][MANUAL_HTML_16]
 * [examples][EXAMPLES_16]
-* <a href="{{ site.baseurl }}/1.6/apidocs" class="download_external" id="/1.6/apidocs">Javadoc</a>
+* [Javadoc][JAVADOC_16]
 
 ## Older releases
 
@@ -151,6 +133,34 @@ Older releases can be found in the [archives][ARCHIVES].
 [GPG_KEYS]: https://www.apache.org/dist/accumulo/KEYS "KEYS"
 [ARCHIVES]: https://archive.apache.org/dist/accumulo
 
+[ASC_BIN_16]: https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz.asc
+[ASC_SRC_16]: https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz.asc
+
+[ASC_BIN_17]: https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz.asc
+[ASC_SRC_17]: https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz.asc
+
+[BIN_16]: https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz
+{: .download_external link-suffix="/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" id="/downloads/accumulo-1.6.5-bin.tar.gz" }
+[SRC_16]: https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz
+{: .download_external link-suffix="/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" id="/downloads/accumulo-1.6.5-src.tar.gz" }
+
+[BIN_17]: https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz
+{: .download_external link-suffix="/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" id="/downloads/accumulo-1.7.1-bin.tar.gz" }
+[SRC_17]: https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz
+{: .download_external link-suffix="/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" id="/downloads/accumulo-1.7.1-src.tar.gz" }
+
+[README_16]: https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=rel/1.6.5
+{: .download_external id="/1.6/README" }
+[README_17]: https://github.com/apache/accumulo/blob/rel/1.7.1/README.md
+{: .download_external id="/1.7/README" }
+
+[JAVADOC_16]: {{ site.baseurl }}/1.6/apidocs
+{: .download_external id="/1.6/apidocs" }
+[JAVADOC_17]: {{ site.baseurl }}/1.7/apidocs
+{: .download_external id="/1.7/apidocs" }
+
+[MANUAL_PDF_16]: https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-docs/1.6.5/accumulo-docs-1.6.5-user-manual.pdf
+{: .download_external id="/1.6/accumulo_user_manual.pdf" }
 [MANUAL_HTML_16]: {{ site.baseurl }}/1.6/accumulo_user_manual "1.6 user manual"
 [MANUAL_HTML_17]: {{ site.baseurl }}/1.7/accumulo_user_manual "1.7 user manual"
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/example/wikisearch.md
----------------------------------------------------------------------
diff --git a/example/wikisearch.md b/example/wikisearch.md
index c084ec1..51631f0 100644
--- a/example/wikisearch.md
+++ b/example/wikisearch.md
@@ -2,133 +2,110 @@
 title: The wikipedia search example explained, with performance numbers.
 ---
 
-Apache Accumulo Query Performance
---------------------------
+## Apache Accumulo Query Performance
 
-Sample Application
-------------------
+## Sample Application
 
-Starting with release 1.4, Accumulo includes an example web application that provides a flexible,  scalable search over the articles of Wikipedia, a widely available medium-sized corpus.
+Starting with release 1.4, Accumulo includes an example web application that
+provides a flexible, scalable search over the articles of Wikipedia, a widely
+available medium-sized corpus.
 
-The example uses an indexing technique helpful for doing multiple logical tests against content.  In this case, we can perform a word search on Wikipedia articles.   The sample application takes advantage of 3 unique capabilities of Accumulo:
+The example uses an indexing technique helpful for doing multiple logical tests
+against content. In this case, we can perform a word search on Wikipedia
+articles. The sample application takes advantage of 3 unique capabilities of
+Accumulo:
 
- 1. Extensible iterators that operate within the distributed tablet servers of the key-value store
- 2. Custom aggregators which can efficiently condense information during the various life-cycles of the log-structured merge tree 
- 3. Custom load balancing, which ensures that a table is evenly distributed on all tablet servers
+1. Extensible iterators that operate within the distributed tablet servers of
+   the key-value store
+1. Custom aggregators which can efficiently condense information during the
+   various life-cycles of the log-structured merge tree 
+1. Custom load balancing, which ensures that a table is evenly distributed on
+   all tablet servers
 
-In the example, Accumulo tracks the cardinality of all terms as elements are ingested.  If the cardinality is small enough, it will track the set of documents by term directly.  For example:
+In the example, Accumulo tracks the cardinality of all terms as elements are
+ingested. If the cardinality is small enough, it will track the set of
+documents by term directly. For example:
 
 <style type="text/css">
-table, td, th {
+table.wiki, table.wiki td, table.wiki th {
   padding-right: 5px;
   padding-left: 5px;
   border: 1px solid black;
   border-collapse: collapse;
 }
-td {
+table.wiki td {
   text-align: right;
 }
-.lt {
-  text-align: left;
-}
 </style>
 
-<table>
-<tr>
-<th>Row (word)</th>
-<th colspan="2">Value (count, document list)</th>
-</tr><tr>
-<td>Octopus
-<td>2
-<td class='lt'>[Document 57, Document 220]
-</tr><tr>
-<td>Other
-<td>172,849
-<td class='lt'>[]
-</tr><tr>
-<td>Ostrich
-<td>1
-<td class='lt'>[Document 901]
-</tr>
-</table>
-
-Searches can be optimized to focus on low-cardinality terms.  To create these counts, the example installs “aggregators” which are used to combine inserted values.  The ingester just writes simple  “(Octopus, 1, Document 57)” tuples.  The tablet servers then used the installed aggregators to merge the cells as the data is re-written, or queried.  This reduces the in-memory locking required to update high-cardinality terms, and defers aggregation to a later time, where it can be done more efficiently.
-
-The example also creates a reverse word index to map each word to the document in which it appears. But it does this by choosing an arbitrary partition for the document.  The article, and the word index for the article are grouped together into the same partition.  For example:
-
-<table>
-<tr>
-<th>Row (partition)
-<th>Column Family
-<th>Column Qualifier
-<th>Value
-<tr>
-<td>1
-<td>D
-<td>Document 57
-<td>“smart Octopus”
-<tr>
-<td>1
-<td>Word, Octopus
-<td>Document 57
-<td>
-<tr>
-<td>1
-<td>Word, smart
-<td>Document 57
-<td>
-<tr>
-<td>...
-<td>
-<td>
-<td>
-<tr>
-<td>2
-<td>D
-<td>Document 220
-<td>“big Octopus”
-<tr>
-<td>2
-<td>Word, big
-<td>Document 220
-<td>
-<tr>
-<td>2
-<td>Word, Octopus
-<td>Document 220
-<td>
-</table>
-
-Of course, there would be large numbers of documents in each partition, and the elements of those documents would be interlaced according to their sort order.
-
-By dividing the index space into partitions, the multi-word searches can be performed in parallel across all the nodes.  Also, by grouping the document together with its index, a document can be retrieved without a second request from the client.  The query “octopus” and “big” will be performed on all the servers, but only those partitions for which the low-cardinality term “octopus” can be found by using the aggregated reverse index information.  The query for a document is performed by extensions provided in the example.  These extensions become part of the tablet server's iterator stack.  By cloning the underlying iterators, the query extensions can seek to specific words within the index, and when it finds a matching document, it can then seek to the document location and retrieve the contents.
-
-We loaded the example on a  cluster of 10 servers, each with 12 cores, and 32G RAM, 6 500G drives.  Accumulo tablet servers were allowed a maximum of 3G of working memory, of which 2G was dedicated to caching file data.
-
-Following the instructions in the example, the Wikipedia XML data for articles was loaded for English, Spanish and German languages into 10 partitions.  The data is not partitioned by language: multiple languages were used to get a larger set of test data.  The data load took around 8 hours, and has not been optimized for scale.  Once the data was loaded, the content was compacted which took about 35 minutes.
-
-The example uses the language-specific tokenizers available from the Apache Lucene project for Wikipedia data.
+{: .wiki }
+| Row (word) | Value (count) | Value (document list)       |
+|------------|--------------:|:----------------------------|
+| Octopus    | 2             | [Document 57, Document 220] |
+| Other      | 172,849       | []                          |
+| Ostrich    | 1             | [Document 901]              |
+
+Searches can be optimized to focus on low-cardinality terms. To create these
+counts, the example installs "aggregators" which are used to combine inserted
+values. The ingester just writes simple "(Octopus, 1, Document 57)" tuples.
+The tablet servers then use the installed aggregators to merge the cells as
+the data is re-written or queried. This reduces the in-memory locking
+required to update high-cardinality terms, and defers aggregation to a later
+time, where it can be done more efficiently.
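+
+In the current client API, this aggregator role is filled by combiner
+iterators. A minimal sketch of attaching a `SummingCombiner` to a term-count
+column (the connector, table, and column family names here are illustrative,
+not part of the shipped example):
+
+```java
+import java.util.Collections;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.iterators.LongCombiner;
+import org.apache.accumulo.core.iterators.user.SummingCombiner;
+
+public class TermCountCombiner {
+  // Sum the simple "(term, 1)" entries written by the ingester during
+  // compactions and scans instead of read-modify-writing them at insert time.
+  public static void attach(Connector conn) throws Exception {
+    IteratorSetting setting = new IteratorSetting(10, "termCount", SummingCombiner.class);
+    SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+    SummingCombiner.setColumns(setting,
+        Collections.singletonList(new IteratorSetting.Column("count")));
+    conn.tableOperations().attachIterator("wikiIndex", setting);
+  }
+}
+```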
+
+The example also creates a reverse word index to map each word to the document
+in which it appears. But it does this by choosing an arbitrary partition for
+the document. The article, and the word index for the article are grouped
+together into the same partition. For example:
+
+{: .wiki }
+| Row (partition) | Column Family | Column Qualifier | Value           |
+|-----------------|---------------|------------------|-----------------|
+| 1               | D             | Document 57      | "smart Octopus" |
+| 1               | Word, Octopus | Document 57      |                 |
+| 1               | Word, smart   | Document 57      |                 |
+| ...             |               |                  |                 |
+| 2               | D             | Document 220     | "big Octopus"   |
+| 2               | Word, big     | Document 220     |                 |
+| 2               | Word, Octopus | Document 220     |                 |
+
+Of course, there would be large numbers of documents in each partition, and the
+elements of those documents would be interlaced according to their sort order.
+
+By dividing the index space into partitions, the multi-word searches can be
+performed in parallel across all the nodes. Also, by grouping the document
+together with its index, a document can be retrieved without a second request
+from the client. The query "octopus" and "big" will be performed on all the
+servers, but only on those partitions containing the low-cardinality term
+"octopus", which can be identified using the aggregated reverse index
+information. The query for a
+document is performed by extensions provided in the example. These extensions
+become part of the tablet server's iterator stack. By cloning the underlying
+iterators, the query extensions can seek to specific words within the index,
+and when they find a matching document, they can then seek to the document
+location and retrieve the contents.
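+
+The example ships its own query iterators, but the same document-partitioned
+intersection pattern is available out of the box through Accumulo's
+`IntersectingIterator`. A minimal sketch using that stock iterator (the
+connector, table name, and terms are illustrative):
+
+```java
+import java.util.Collections;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.BatchScanner;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.user.IntersectingIterator;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.io.Text;
+
+public class IntersectionQuery {
+  public static void query(Connector conn) throws Exception {
+    // Scan every partition in parallel; each tablet intersects the terms locally.
+    BatchScanner bs = conn.createBatchScanner("wikiIndex", new Authorizations(), 10);
+    bs.setRanges(Collections.singleton(new Range()));
+
+    IteratorSetting ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
+    IntersectingIterator.setColumnFamilies(ii,
+        new Text[] {new Text("old"), new Text("man"), new Text("sea")});
+    bs.addScanIterator(ii);
+
+    for (Entry<Key,Value> e : bs) {
+      // The column qualifier holds the id of a document containing all terms.
+      System.out.println(e.getKey().getColumnQualifier());
+    }
+    bs.close();
+  }
+}
+```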
+
+We loaded the example on a cluster of 10 servers, each with 12 cores, 32G of
+RAM, and six 500G drives. Accumulo tablet servers were allowed a maximum of 3G of
+working memory, of which 2G was dedicated to caching file data.
+
+Following the instructions in the example, the Wikipedia XML data for articles
+was loaded for English, Spanish and German languages into 10 partitions. The
+data is not partitioned by language: multiple languages were used to get a
+larger set of test data. The data load took around 8 hours, and has not been
+optimized for scale. Once the data was loaded, the content was compacted, which
+took about 35 minutes.
+
+The example uses the language-specific tokenizers available from the Apache
+Lucene project for Wikipedia data.
 
 Original files:
 
-<table>
-<tr>
-<th>Articles
-<th>Compressed size
-<th>Filename
-<tr>
-<td>1.3M
-<td>2.5G
-<td>dewiki-20111120-pages-articles.xml.bz2
-<tr>
-<td>3.8M
-<td>7.9G
-<td>enwiki-20111115-pages-articles.xml.bz2
-<tr>
-<td>0.8M
-<td>1.4G
-<td>eswiki-20111112-pages-articles.xml.bz2
-</table>
+{: .wiki }
+| Articles | Compressed size | Filename                               |
+|----------|-----------------|----------------------------------------|
+| 1.3M     | 2.5G            | dewiki-20111120-pages-articles.xml.bz2 |
+| 3.8M     | 7.9G            | enwiki-20111115-pages-articles.xml.bz2 |
+| 0.8M     | 1.4G            | eswiki-20111112-pages-articles.xml.bz2 |
 
 The resulting tables:
 
@@ -140,246 +117,107 @@ The resulting tables:
 
 Roughly a 6:1 increase in size.
 
-We performed the following queries, and repeated the set 5 times.  The query language is much more expressive than what is shown below.  The actual query specified that these words were to be found in the body of the article.  Regular expressions, searches within titles, negative tests, etc are available.
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<th>Matches
-<th>Result Size
-<tr>
-<td>“old” and “man” and “sea”
-<td>4.07
-<td>3.79
-<td>3.65
-<td>3.85
-<td>3.67
-<td>22,956
-<td>3,830,102
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>3.06
-<td>3.06
-<td>2.78
-<td>3.02
-<td>2.92
-<td>10,755
-<td>1,757,293
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>0.08
-<td>0.08
-<td>0.1
-<td>0.11
-<td>0.1
-<td>6
-<td>808
-<tr>
-<td>“fast”  and ( “furious” or “furriest”) 
-<td>1.34
-<td>1.33
-<td>1.3
-<td>1.31
-<td>1.31
-<td>2,973
-<td>493,800
-<tr>
-<td>“slashdot” and “grok”
-<td>0.06
-<td>0.06
-<td>0.06
-<td>0.06
-<td>0.06
-<td>14
-<td>2,371
-<tr>
-<td>“three” and “little” and “pigs”
-<td>0.92
-<td>0.91
-<td>0.9
-<td>1.08
-<td>0.88
-<td>2,742
-<td>481,531
-</table>
-
-Because the terms are tested together within the tablet server, even fairly high-cardinality terms such as “old,” “man,” and “sea” can be tested efficiently, without needing to return to the client, or make distributed calls between servers to perform the intersection between terms.
-
-For reference, here are the cardinalities for all the terms in the query (remember, this is across all languages loaded):
-
-<table>
-<tr> <th>Term <th> Cardinality
-<tr> <td> ducky <td> 795
-<tr> <td> ernie <td> 13,433
-<tr> <td> fast <td> 166,813
-<tr> <td> furious <td> 10,535
-<tr> <td> furriest <td> 45
-<tr> <td> grok <td> 1,168
-<tr> <td> in <td> 1,884,638
-<tr> <td> little <td> 320,748
-<tr> <td> man <td> 548,238
-<tr> <td> old <td> 720,795
-<tr> <td> paris <td> 232,464
-<tr> <td> pigs <td> 8,356
-<tr> <td> rubber <td> 17,235
-<tr> <td> sea <td> 247,231
-<tr> <td> slashdot <td> 2,343
-<tr> <td> spring <td> 125,605
-<tr> <td> the <td> 3,509,498
-<tr> <td> three <td> 718,810
-</table>
-
-
-Accumulo supports caching index information, which is turned on by default, and for the non-index blocks of a file, which is not. After turning on data block caching for the wiki table:
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<tr>
-<td>“old” and “man” and “sea”
-<td>2.47
-<td>2.48
-<td>2.51
-<td>2.48
-<td>2.49
-</tr><tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>1.33
-<td>1.42
-<td>1.6
-<td>1.61
-<td>1.47
-</tr><tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>0.07
-<td>0.08
-<td>0.07
-<td>0.07
-<td>0.07
-</tr><tr>
-<td>“fast” and ( “furious” or “furriest”) 
-<td>1.28
-<td>0.78
-<td>0.77
-<td>0.79
-<td>0.78
-</tr><tr>
-<td>“slashdot” and “grok”
-<td>0.04
-<td>0.04
-<td>0.04
-<td>0.04
-<td>0.04
-</tr><tr>
-<td>“three” and “little” and “pigs”
-<td>0.55
-<td>0.32
-<td>0.32
-<td>0.31
-<td>0.27
-</tr>
-<table>
-<p>
-For comparison, these are the cold start lookup times (restart Accumulo, and drop the operating system disk cache):
-
-<table>
-<tr>
-<th>Query
-<th>Sample
-<tr>
-<td>“old” and “man” and “sea”
-<td>13.92
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>8.46
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>2.96
-<tr>
-<td>“fast” and ( “furious” or “furriest”) 
-<td>6.77
-<tr>
-<td>“slashdot” and “grok”
-<td>4.06
-<tr>
-<td>“three” and “little” and “pigs”
-<td>8.13
-</table>
+We performed the following queries, and repeated the set 5 times. The query
+language is much more expressive than what is shown below. The actual query
+specified that these words were to be found in the body of the article. Regular
+expressions, searches within titles, negative tests, etc. are available.
+
+{: .wiki }
+| Query                                   | Sample 1 (seconds) | Sample 2 (seconds) | Sample 3 (seconds) | Sample 4 (seconds) | Sample 5 (seconds) | Matches | Result Size |
+|-----------------------------------------|------|------|------|------|------|--------|-----------|
+| "old" and "man" and "sea"               | 4.07 | 3.79 | 3.65 | 3.85 | 3.67 | 22,956 | 3,830,102 |
+| "paris" and "in" and "the" and "spring" | 3.06 | 3.06 | 2.78 | 3.02 | 2.92 | 10,755 | 1,757,293 |
+| "rubber" and "ducky" and "ernie"        | 0.08 | 0.08 | 0.1  | 0.11 | 0.1  | 6      | 808       |
+| "fast" and ( "furious" or "furriest")   | 1.34 | 1.33 | 1.3  | 1.31 | 1.31 | 2,973  | 493,800   |
+| "slashdot" and "grok"                   | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 14     | 2,371     |
+| "three" and "little" and "pigs"         | 0.92 | 0.91 | 0.9  | 1.08 | 0.88 | 2,742  | 481,531   |
+
+Because the terms are tested together within the tablet server, even fairly
+high-cardinality terms such as "old," "man," and "sea" can be tested
+efficiently, without needing to return to the client, or make distributed calls
+between servers to perform the intersection between terms.
+
+For reference, here are the cardinalities for all the terms in the query
+(remember, this is across all languages loaded):
+
+{: .wiki }
+| Term     | Cardinality |
+|----------|-------------|
+| ducky    | 795         |
+| ernie    | 13,433      |
+| fast     | 166,813     |
+| furious  | 10,535      |
+| furriest | 45          |
+| grok     | 1,168       |
+| in       | 1,884,638   |
+| little   | 320,748     |
+| man      | 548,238     |
+| old      | 720,795     |
+| paris    | 232,464     |
+| pigs     | 8,356       |
+| rubber   | 17,235      |
+| sea      | 247,231     |
+| slashdot | 2,343       |
+| spring   | 125,605     |
+| the      | 3,509,498   |
+| three    | 718,810     |
+
+Accumulo supports caching index information, which is turned on by default, and
+caching the non-index data blocks of a file, which is not. After turning on
+data block caching for the wiki table:
+
+{: .wiki }
+| Query                                   | Sample 1 (seconds) | Sample 2 (seconds) | Sample 3 (seconds) | Sample 4 (seconds) | Sample 5 (seconds) |
+|-----------------------------------------|------|------|------|------|------|
+| "old" and "man" and "sea"               | 2.47 | 2.48 | 2.51 | 2.48 | 2.49 |
+| "paris" and "in" and "the" and "spring" | 1.33 | 1.42 | 1.6  | 1.61 | 1.47 |
+| "rubber" and "ducky" and "ernie"        | 0.07 | 0.08 | 0.07 | 0.07 | 0.07 |
+| "fast" and ( "furious" or "furriest")   | 1.28 | 0.78 | 0.77 | 0.79 | 0.78 |
+| "slashdot" and "grok"                   | 0.04 | 0.04 | 0.04 | 0.04 | 0.04 |
+| "three" and "little" and "pigs"         | 0.55 | 0.32 | 0.32 | 0.31 | 0.27 |
+
+For comparison, these are the cold start lookup times (restart Accumulo, and
+drop the operating system disk cache):
+
+{: .wiki }
+| Query                                   | Sample |
+|-----------------------------------------|--------|
+| "old" and "man" and "sea"               | 13.92  |
+| "paris" and "in" and "the" and "spring" | 8.46   |
+| "rubber" and "ducky" and "ernie"        | 2.96   |
+| "fast" and ( "furious" or "furriest")   | 6.77   |
+| "slashdot" and "grok"                   | 4.06   |
+| "three" and "little" and "pigs"         | 8.13   |
 
 ### Random Query Load
 
-Random queries were generated using common english words.  A uniform random sample of 3 to 5 words taken from the 10000 most common words in the Project Gutenberg's online text collection were joined with “and”.  Words containing anything other than letters (such as contractions) were not used.  A client was started simultaneously on each of the 10 servers and each ran 100 random queries (1000 queries total).
-
-
-<table>
-<tr>
-<th>Time
-<th>Count
-<tr>
-<td>41.97
-<td>440,743
-<tr>
-<td>41.61
-<td>320,522
-<tr>
-<td>42.11
-<td>347,969
-<tr>
-<td>38.32
-<td>275,655
-</table>
+Random queries were generated using common English words. A uniform random
+sample of 3 to 5 words, taken from the 10,000 most common words in Project
+Gutenberg's online text collection, was joined with "and". Words containing
+anything other than letters (such as contractions) were not used. A client was
+started simultaneously on each of the 10 servers and each ran 100 random
+queries (1000 queries total).
+
+{: .wiki }
+| Time  | Count   |
+|-------|---------|
+| 41.97 | 440,743 |
+| 41.61 | 320,522 |
+| 42.11 | 347,969 |
+| 38.32 | 275,655 |
 
 ### Query Load During Ingest
 
-The English wikipedia data was re-ingested on top of the existing, compacted data. The following  query samples were taken in 5 minute intervals while ingesting 132 articles/second:
-
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<tr>
-<td>“old” and “man” and “sea”
-<td>4.91
-<td>3.92
-<td>11.58
-<td>9.86
-<td>10.21
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>5.03
-<td>3.37
-<td>12.22
-<td>3.29
-<td>9.46
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>4.21
-<td>2.04
-<td>8.57
-<td>1.54
-<td>1.68
-<tr>
-<td>“fast”  and ( “furious” or “furriest”) 
-<td>5.84
-<td>2.83
-<td>2.56
-<td>3.12
-<td>3.09
-<tr>
-<td>“slashdot” and “grok”
-<td>5.68
-<td>2.62
-<td>2.2
-<td>2.78
-<td>2.8
-<tr>
-<td>“three” and “little” and “pigs”
-<td>7.82
-<td>3.42
-<td>2.79
-<td>3.29
-<td>3.3
-</table>
+The English Wikipedia data was re-ingested on top of the existing, compacted
+data. The following query samples were taken at 5-minute intervals while
+ingesting 132 articles/second:
+
+{: .wiki }
+| Query                                   | Sample 1 (seconds)  | Sample 2 (seconds) | Sample 3 (seconds) | Sample 4 (seconds) | Sample 5 (seconds) |
+|-----------------------------------------|------|------|-------|------|-------|
+| "old" and "man" and "sea"               | 4.91 | 3.92 | 11.58 | 9.86 | 10.21 |
+| "paris" and "in" and "the" and "spring" | 5.03 | 3.37 | 12.22 | 3.29 | 9.46  |
+| "rubber" and "ducky" and "ernie"        | 4.21 | 2.04 | 8.57  | 1.54 | 1.68  |
+| "fast" and ( "furious" or "furriest")   | 5.84 | 2.83 | 2.56  | 3.12 | 3.09  |
+| "slashdot" and "grok"                   | 5.68 | 2.62 | 2.2   | 2.78 | 2.8   |
+| "three" and "little" and "pigs"         | 7.82 | 3.42 | 2.79  | 3.29 | 3.3   |

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/glossary.md
----------------------------------------------------------------------
diff --git a/glossary.md b/glossary.md
index 67330eb..4b8761e 100644
--- a/glossary.md
+++ b/glossary.md
@@ -3,56 +3,122 @@ title: Glossary
 nav: nav_glossary
 ---
 
-<dl>
-<dt>authorizations</dt>
-<dd>a set of strings associated with a user or with a particular scan that will be used to determine which key/value pairs are visible to the user.</dd>
-<dt>cell</dt>
-<dd>a set of key/value pairs whose keys differ only in timestamp.</dd>
-<dt>column</dt>
-<dd>the portion of the key that sorts after the row and is divided into family, qualifier, and visibility.</dd>
-<dt>column family</dt>
-<dd>the portion of the key that sorts second and controls locality groups, the row/column hybrid nature of accumulo.</dd>
-<dt>column qualifier</dt>
-<dd>the portion of the key that sorts third and provides additional key uniqueness.</dd>
-<dt>column visibility</dt>
-<dd>the portion of the key that sorts fourth and controls user access to individual key/value pairs. Visibilities are boolean AND (&) and OR (|) combinations of authorization strings with parentheses required to determine ordering, e.g. (AB&C)|DEF.</dd>
-<dt>iterator</dt>
-<dd>a mechanism for modifying tablet-local portions of the key/value space. Iterators are used for standard administrative tasks as well as for custom processing.</dd>
-<dt>iterator priority</dt>
-<dd>an iterator must be configured with a particular scope and priority.  When a tablet server enters that scope, it will instantiate iterators in priority order starting from the smallest priority and ending with the largest, and apply each to the data read before rewriting the data or sending the data to the user.</dd>
-<dt>iterator scopes</dt>
-<dd>the possible scopes for iterators are where the tablet server is already reading and/or writing data: minor compaction / flush time (<em>minc</em> scope), major compaction / file merging time (<em>majc</em> scope), and query time (<em>scan</em> scope).</dd>
-<dt>gc</dt>
-<dd>process that identifies temporary files in HDFS that are no longer needed by any process, and deletes them.</dd>
-<dt>key</dt>
-<dd>the key into the distributed sorted map which is accumulo.  The key is subdivided into row, column, and timestamp.  The column is further divided into  family, qualifier, and visibility.</dd>
-<dt>locality group</dt>
-<dd>a set of column families that will be grouped together on disk.  With no locality groups configured, data is stored on disk in row order.  If each column family were configured to be its own locality group, the data for each column would be stored separately, in row order.  Configuring sets of columns into locality groups is a compromise between the two approaches and will improve performance when multiple columns are accessed in the same scan.</dd>
-<dt>log-structured merge-tree</dt>
-<dd>the sorting / flushing / merging scheme on which BigTable's design is based.</dd>
-<dt>logger</dt>
-<dd>in 1.4 and older, process that accepts updates to tablet servers and writes them to local on-disk storage for redundancy. in 1.5 the functionality was subsumed by the tablet server and datanode with HDFS writes.</dd>
-<dt>major compaction</dt>
-<dd>merging multiple files into a single file.  If all of a tablet's files are merged into a single file, it is called a <em>full major compaction</em>.</dd>
-<dt>master</dt>
-<dd>process that detects and responds to tablet failures, balances load across tablet servers by assigning and migrating tablets when required, coordinates table operations, and handles tablet server logistics (startup, shutdown, recovery).</dd>
-<dt>minor compaction</dt>
-<dd>flushing data from memory to disk.  Usually this creates a new file for a tablet, but if the memory flushed is merge-sorted in with data from an existing file (replacing that file), it is called a <em>merging minor compaction</em>.</dd>
-<dt>monitor</dt>
-<dd>process that displays status and usage information for all Accumulo components.</dd>
-<dt>permissions</dt>
-<dd>administrative abilities that must be given to a user such as creating tables or users and changing permissions or configuration parameters.</dd>
-<dt>row</dt>
-<dd>the portion of the key that controls atomicity.  Keys with the same row are guaranteed to remain on a single tablet hosted by a single tablet server, therefore multiple key/value pairs can be added to or removed from a row at the same time. The row is used for the primary sorting of the key.</dd>
-<dt>scan</dt>
-<dd>reading a range of key/value pairs.</dd>
-<dt>tablet</dt>
-<dd>a contiguous key range; the unit of work for a tablet server.</dd>
-<dt>tablet servers</dt>
-<dd>a set of servers that hosts reads and writes for tablets.  Each server hosts a distinct set of tablets at any given time, but the tablets may be hosted by different servers over time.</dd>
-<dt>timestamp</dt>
-<dd>the portion of the key that controls versioning.  Otherwise identical keys with differing timestamps are considered to be versions of a single <em>cell</em>.  Accumulo can be configured to keep the <em>N</em> newest versions of each <em>cell</em>.  When a deletion entry is inserted, it deletes all earlier versions for its cell.</dd>
-<dt>value</dt>
-<dd>immutable bytes associated with a particular key.</dd>
-</dl>
+
+authorizations
+: a set of strings associated with a user or with a particular scan that will
+be used to determine which key/value pairs are visible to the user.
+
+cell
+: a set of key/value pairs whose keys differ only in timestamp.
+
+column
+: the portion of the key that sorts after the row and is divided into family,
+qualifier, and visibility.
+
+column family
+: the portion of the key that sorts second and controls locality groups, the
+row/column hybrid nature of Accumulo.
+
+column qualifier
+: the portion of the key that sorts third and provides additional key
+uniqueness.
+
+column visibility
+: the portion of the key that sorts fourth and controls user access to
+individual key/value pairs. Visibilities are boolean AND (&amp;) and OR (|)
+combinations of authorization strings with parentheses required to determine
+ordering, e.g. (AB&amp;C)|DEF.
+
+iterator
+: a mechanism for modifying tablet-local portions of the key/value space.
+Iterators are used for standard administrative tasks as well as for custom
+processing.
+
+iterator priority
+: an iterator must be configured with a particular scope and priority. When a
+tablet server enters that scope, it will instantiate iterators in priority
+order starting from the smallest priority and ending with the largest, and
+apply each to the data read before rewriting the data or sending the data to
+the user.
+
+iterator scopes
+: the possible scopes for iterators are where the tablet server is already
+reading and/or writing data: minor compaction / flush time (*minc*
+scope), major compaction / file merging time (*majc* scope), and query
+time (*scan* scope).
+
+gc
+: process that identifies temporary files in HDFS that are no longer needed by
+any process, and deletes them.
+
+key
+: the key into the distributed sorted map which is Accumulo. The key is
+subdivided into row, column, and timestamp. The column is further divided into
+family, qualifier, and visibility.
+
+locality group
+: a set of column families that will be grouped together on disk. With no
+locality groups configured, data is stored on disk in row order. If each
+column family were configured to be its own locality group, the data for each
+column would be stored separately, in row order. Configuring sets of columns
+into locality groups is a compromise between the two approaches and will
+improve performance when multiple columns are accessed in the same scan.
+
+log-structured merge-tree
+: the sorting / flushing / merging scheme on which BigTable's design is based.
+
+logger
+: in 1.4 and older, process that accepts updates to tablet servers and writes
+them to local on-disk storage for redundancy. In 1.5, this functionality was
+subsumed by the tablet server and datanode with HDFS writes.
+
+major compaction
+: merging multiple files into a single file. If all of a tablet's files are
+merged into a single file, it is called a *full major compaction*.
+
+master
+: process that detects and responds to tablet failures, balances load across
+tablet servers by assigning and migrating tablets when required, coordinates
+table operations, and handles tablet server logistics (startup, shutdown,
+recovery).
+
+minor compaction
+: flushing data from memory to disk. Usually this creates a new file for a
+tablet, but if the memory flushed is merge-sorted in with data from an existing
+file (replacing that file), it is called a *merging minor compaction*.
+
+monitor
+: process that displays status and usage information for all Accumulo
+components.
+
+permissions
+: administrative abilities that must be given to a user such as creating tables
+or users and changing permissions or configuration parameters.
+
+row
+: the portion of the key that controls atomicity. Keys with the same row are
+guaranteed to remain on a single tablet hosted by a single tablet server,
+therefore multiple key/value pairs can be added to or removed from a row at the
+same time. The row is used for the primary sorting of the key.
+
+scan
+: reading a range of key/value pairs.
+
+tablet
+: a contiguous key range; the unit of work for a tablet server.
+
+tablet servers
+: a set of servers that hosts reads and writes for tablets. Each server hosts
+a distinct set of tablets at any given time, but the tablets may be hosted by
+different servers over time.
+
+timestamp
+: the portion of the key that controls versioning. Otherwise identical keys
+with differing timestamps are considered to be versions of a single
+*cell*. Accumulo can be configured to keep the *N* newest
+versions of each *cell*. When a deletion entry is inserted, it deletes
+all earlier versions for its cell.
+
+value
+: immutable bytes associated with a particular key.
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/index.md
----------------------------------------------------------------------
diff --git a/index.md b/index.md
index b18caf1..d9146d7 100644
--- a/index.md
+++ b/index.md
@@ -6,22 +6,21 @@ nav: nav_index
 
 <br>
 <div class="jumbotron" style="text-align: center">
-<p>
-The Apache Accumulo&trade; sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system.  
-</p>
+<p>The Apache Accumulo&trade; sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system.</p>
 <a class="btn btn-success btn-lg" href="downloads/" role="button"><span class="glyphicon glyphicon-download"></span> Download</a>
 </div>
 
-Apache Accumulo is based on Google's [BigTable][1] design and is built on
-top of [Apache Hadoop][2], [Apache Zookeeper][3], and [Apache Thrift][4].  Apache Accumulo features a few novel 
-improvements on the BigTable design in the form of cell-based access control and a server-side
-programming mechanism that can modify key/value pairs at various points in the
-data management process.  Other notable improvements and feature are outlined
-[here](notable_features).
+Apache Accumulo is based on Google's [BigTable][1] design and is built on top
+of [Apache Hadoop][2], [Apache Zookeeper][3], and [Apache Thrift][4]. Apache
+Accumulo features a few novel improvements on the BigTable design in the form
+of cell-based access control and a server-side programming mechanism that can
+modify key/value pairs at various points in the data management process. Other
+notable improvements and features are outlined [here](notable_features).
 
-Google published the design of BigTable in 2006.  Several other open source
-projects have implemented aspects of this design including [Apache HBase][5], [Hypertable][6],
-and [Apache Cassandra][7].  Accumulo began its development in 2008 and joined the [Apache community][8] in 2011.
+Google published the design of BigTable in 2006. Several other open source
+projects have implemented aspects of this design including [Apache HBase][5],
+[Hypertable][6], and [Apache Cassandra][7]. Accumulo began its development in
+2008 and joined the [Apache community][8] in 2011.
 
 [1]: https://research.google.com/archive/bigtable.html "BigTable"
 [2]: https://hadoop.apache.org                   "Apache Hadoop"

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/mailing_list.md
----------------------------------------------------------------------
diff --git a/mailing_list.md b/mailing_list.md
index 6b939cc..09fbb42 100644
--- a/mailing_list.md
+++ b/mailing_list.md
@@ -7,42 +7,13 @@ All Accumulo mailing lists are in the accumulo.apache.org domain. Please note
 that search providers linked on this page are not part of the [official Apache
 mailing list archives][5].
 
-<table class="table">
-
-<tr>
-<th>user</th>
-<td>For general user questions, help, and announcements.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-user" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/user@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:user-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:user-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td><a href="mailto:user@accumulo.apache.org" class="btn btn-warning btn-xs"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
-</tr>
-
-<tr>
-<th>dev</th>
-<td>For anyone interested in contributing or following development activities.
-It is recommended that you also subscribe to <b>notifications</b>.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-dev" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/dev@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:dev-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:dev-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td><a href="mailto:dev@accumulo.apache.org" class="btn btn-warning btn-xs"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
-</tr>
-
-<tr>
-<th>commits</th>
-<td>For following commits.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-commits" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/commits@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:commits-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:commits-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td></td>
-</tr>
-
-<tr>
-<th>notifications</th>
-<td>For following JIRA notifications.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-notifications" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/notifications@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:notifications-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:notifications-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td></td>
-</tr>
-
-</table>
+{: .table }
+| Name              | Description                                      | Read | Follow | Post |
+|-------------------|--------------------------------------------------|------|--------|------|
+| **user**          | General user questions, help, and announcements  | [<span class="glyphicon glyphicon-book"/> Archive][U_A] [<span class="glyphicon glyphicon-search"/> Search][U_S] | [<span class="glyphicon glyphicon-plus"/> Subscribe][U_SU] [<span class="glyphicon glyphicon-remove"/> Unsubscribe][U_UN] | [<span class="glyphicon glyphicon-envelope"/> Post][U_P] |
+| **dev**           | Contributor discussions and development activity | [<span class="glyphicon glyphicon-book"/> Archive][D_A] [<span class="glyphicon glyphicon-search"/> Search][D_S] | [<span class="glyphicon glyphicon-plus"/> Subscribe][D_SU] [<span class="glyphicon glyphicon-remove"/> Unsubscribe][D_UN] | [<span class="glyphicon glyphicon-envelope"/> Post][D_P] |
+| **commits**       | Code changes                                     | [<span class="glyphicon glyphicon-book"/> Archive][C_A] [<span class="glyphicon glyphicon-search"/> Search][C_S] | [<span class="glyphicon glyphicon-plus"/> Subscribe][C_SU] [<span class="glyphicon glyphicon-remove"/> Unsubscribe][C_UN] | |
+| **notifications** | Automated notifications (JIRA, etc.)             | [<span class="glyphicon glyphicon-book"/> Archive][N_A] [<span class="glyphicon glyphicon-search"/> Search][N_S] | [<span class="glyphicon glyphicon-plus"/> Subscribe][N_SU] [<span class="glyphicon glyphicon-remove"/> Unsubscribe][N_UN] | |
 
 ## Mailing List Search Providers
 
@@ -51,6 +22,46 @@ It is recommended that you also subscribe to <b>notifications</b>.</td>
 * [Nabble][3]
 * [Search-Hadoop][4]
 
+[U_A]: https://mail-archives.apache.org/mod_mbox/accumulo-user
+{: .btn .btn-primary .btn-xs }
+[U_S]: https://www.mail-archive.com/user@accumulo.apache.org
+{: .btn .btn-info .btn-xs }
+[U_SU]: mailto:user-subscribe@accumulo.apache.org
+{: .btn .btn-success .btn-xs }
+[U_UN]: mailto:user-unsubscribe@accumulo.apache.org
+{: .btn .btn-danger .btn-xs }
+[U_P]: mailto:user@accumulo.apache.org
+{: .btn .btn-warning .btn-xs }
+
+[D_A]: https://mail-archives.apache.org/mod_mbox/accumulo-dev
+{: .btn .btn-primary .btn-xs }
+[D_S]: https://www.mail-archive.com/dev@accumulo.apache.org
+{: .btn .btn-info .btn-xs }
+[D_SU]: mailto:dev-subscribe@accumulo.apache.org
+{: .btn .btn-success .btn-xs }
+[D_UN]: mailto:dev-unsubscribe@accumulo.apache.org
+{: .btn .btn-danger .btn-xs }
+[D_P]: mailto:dev@accumulo.apache.org
+{: .btn .btn-warning .btn-xs }
+
+[C_A]: https://mail-archives.apache.org/mod_mbox/accumulo-commits
+{: .btn .btn-primary .btn-xs }
+[C_S]: https://www.mail-archive.com/commits@accumulo.apache.org
+{: .btn .btn-info .btn-xs }
+[C_SU]: mailto:commits-subscribe@accumulo.apache.org
+{: .btn .btn-success .btn-xs }
+[C_UN]: mailto:commits-unsubscribe@accumulo.apache.org
+{: .btn .btn-danger .btn-xs }
+
+[N_A]: https://mail-archives.apache.org/mod_mbox/accumulo-notifications
+{: .btn .btn-primary .btn-xs }
+[N_S]: https://www.mail-archive.com/notifications@accumulo.apache.org
+{: .btn .btn-info .btn-xs }
+[N_SU]: mailto:notifications-subscribe@accumulo.apache.org
+{: .btn .btn-success .btn-xs }
+[N_UN]: mailto:notifications-unsubscribe@accumulo.apache.org
+{: .btn .btn-danger .btn-xs }
+
 [1]: https://www.mail-archive.com/search?l=all&q=accumulo
 [2]: http://accumulo.markmail.org
 [3]: http://apache-accumulo.1065345.n5.nabble.com

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/notable_features.md
----------------------------------------------------------------------
diff --git a/notable_features.md b/notable_features.md
index e9c4e27..e826001 100644
--- a/notable_features.md
+++ b/notable_features.md
@@ -3,21 +3,12 @@ title: Notable Features
 nav: nav_features
 ---
 
-## Categories
+{::options toc_levels="2" /}
 
-* [Table Design and Configuration](#design)
-* [Integrity/Availability](#integrity)
-* [Performance](#performance)
-* [Testing](#testing)
-* [Client API](#client)
-* [Extensible Behaviors](#behaviors)
-* [General Administration](#admin)
-* [Internal Data Management](#internal_dm)
-* [On-demand Data Management](#ondemand_dm)
+* Will be replaced with the ToC, excluding the "Contents" header
+{:toc}
 
-***
-
-## Table Design and Configuration <a id="design"></a>
+## Table Design and Configuration
 
 ### Iterators
 
@@ -31,7 +22,7 @@ An additional portion of the Key that sorts after the column qualifier and
 before the timestamp. It is called column visibility and enables expressive
 cell-level access control. Authorizations are passed with each query to control
 what data is returned to the user. The column visibilities are boolean AND and
-OR combinations of arbitrary strings (such as "(A&B)|C") and authorizations
+OR combinations of arbitrary strings (such as "(A&amp;B)|C") and authorizations
 are sets of strings (such as {C,D}).
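+
+A minimal sketch of writing a cell with a visibility expression and reading it
+back with matching authorizations (the table name, expression, and
+authorizations are illustrative):
+
+```java
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.ColumnVisibility;
+
+public class VisibilityExample {
+  public static void run(Connector conn) throws Exception {
+    BatchWriter bw = conn.createBatchWriter("records", new BatchWriterConfig());
+    Mutation m = new Mutation("row1");
+    // Only scans presenting (A and B), or C, can see this cell.
+    m.put("attrs", "ssn", new ColumnVisibility("(A&B)|C"), new Value("123-45-6789".getBytes()));
+    bw.addMutation(m);
+    bw.close();
+
+    // The scan authorizations must be a subset of what the user has been granted.
+    Scanner scan = conn.createScanner("records", new Authorizations("A", "B"));
+    for (Entry<Key,Value> e : scan) {
+      System.out.println(e.getKey() + " -> " + e.getValue());
+    }
+  }
+}
+```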
 
 ### Constraints
@@ -66,14 +57,14 @@ over multiple disjoint HDFS instances.  This allows Accumulo to scale beyond the
 of a single namenode.  When used in conjunction with HDFS federation, multiple namenodes
 can share a pool of datanodes.
 
-## Integrity/Availability <a id="integrity"></a>
+## Integrity/Availability
 
 ### Master fail over
 
 Multiple masters can be configured.  Zookeeper locks are used to determine
 which master is active.  The remaining masters simply wait for the current
 master to lose its lock.  Current master state is held in the metadata table
-and Zookeeper (see [FATE](#fate)).
+and Zookeeper (see [FATE][FATE]).
 
 ### Logical time
 
@@ -124,7 +115,7 @@ Stores its metadata in an Accumulo table and Zookeeper.
 
 Scans will not see data inserted into a row after the scan of that row begins.
 
-## Performance <a id="performance"></a>
+## Performance
 
 ### Relative encoding
 
@@ -176,7 +167,7 @@ is generated.  As a block is read more, larger indexes are generated making
 future seeks faster. This strategy allows Accumulo to dynamically respond to
 read patterns without precomputing block indexes when RFiles are written.
 
-## Testing <a id="testing"></a>
+## Testing
 
 ### Mock
 
@@ -195,7 +186,7 @@ instance more closely.
 Using the Mini Accumulo Cluster in unit and integration tests is a great way for
 developers to test their applications against Accumulo in an environment that is
 much closer to physical deployments than Mock Accumulo provided. Accumulo 1.6.0 also
-introduced a [maven-accumulo-plugin]({{ site.baseurl }}/release_notes/1.6.0#maven-plugin) which
+introduced a [maven-accumulo-plugin][M-A-P] which
 can be used to start a Mini Accumulo Cluster instance as a part of the Maven
 lifecycle that your application tests can use.
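+
+A minimal sketch of spinning up a Mini Accumulo Cluster in a test (the
+directory, password, and table name are illustrative):
+
+```java
+import java.io.File;
+import java.nio.file.Files;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.ZooKeeperInstance;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.minicluster.MiniAccumuloCluster;
+
+public class MiniClusterSketch {
+  public static void main(String[] args) throws Exception {
+    File dir = Files.createTempDirectory("mac").toFile();
+    MiniAccumuloCluster mac = new MiniAccumuloCluster(dir, "secret");
+    mac.start();
+
+    // Tests connect exactly as they would to a real cluster.
+    Connector conn = new ZooKeeperInstance(mac.getInstanceName(), mac.getZooKeepers())
+        .getConnector("root", new PasswordToken("secret"));
+    conn.tableOperations().create("test");
+
+    mac.stop();
+  }
+}
+```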
 
@@ -231,7 +222,7 @@ Other tests have no concept of data correctness and have the simple goal of
 crashing Accumulo. Many obscure bugs have been uncovered by this testing
 framework and subsequently corrected.
 
-## Client API <a id="client"></a>
+## Client API
 
 ### [Batch Scanner][4]
 
@@ -271,8 +262,8 @@ available to other languages like Python, Ruby, C++, etc.
 In version 1.6.0, Accumulo introduced [ConditionalMutations][7]
 which allow users to perform efficient, atomic read-modify-write operations on rows. Conditions can
 be defined using equality checks on the values in a column or the absence of a column. For more
-information on using this feature, users can reference the Javadoc for [ConditionalMutation]({{ site.baseurl }}/1.6/apidocs/org/apache/accumulo/core/data/ConditionalMutation) and
-[ConditionalWriter]({{ site.baseurl }}/1.6/apidocs/org/apache/accumulo/core/client/ConditionalWriter)
+information on using this feature, users can reference the Javadoc for [ConditionalMutation][CMUT] and
+[ConditionalWriter][CWRI].
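+
+A minimal sketch of an atomic read-modify-write with this API (the table,
+column, and values are illustrative):
+
+```java
+import org.apache.accumulo.core.client.ConditionalWriter;
+import org.apache.accumulo.core.client.ConditionalWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.data.Condition;
+import org.apache.accumulo.core.data.ConditionalMutation;
+
+public class ConditionalUpdate {
+  public static void bump(Connector conn) throws Exception {
+    ConditionalWriter cw = conn.createConditionalWriter("counters", new ConditionalWriterConfig());
+
+    // Only apply the update if the stored sequence number is still "5".
+    ConditionalMutation cm = new ConditionalMutation("row1");
+    cm.addCondition(new Condition("meta", "seq").setValue("5"));
+    cm.put("meta", "seq", "6");
+
+    ConditionalWriter.Result result = cw.write(cm);
+    System.out.println(result.getStatus()); // ACCEPTED, REJECTED, UNKNOWN, ...
+    cw.close();
+  }
+}
+```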
 
 ### Lexicoders
 
@@ -283,7 +274,7 @@ Lexicoders which have numerous implementations that support for efficient transl
 Java primitives to byte arrays and vice versa. These classes can greatly reduce the burden in
 re-implementing common programming mistakes in encoding.
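+
+A minimal sketch of round-tripping a Java `long` through a lexicoder (values
+are illustrative):
+
+```java
+import org.apache.accumulo.core.client.lexicoder.LongLexicoder;
+
+public class LexicoderSketch {
+  public static void main(String[] args) {
+    LongLexicoder lex = new LongLexicoder();
+
+    // The encoded bytes sort in the same order as the original longs, so they
+    // can be used directly in rows, column qualifiers, or values.
+    byte[] encoded = lex.encode(42L);
+    long decoded = lex.decode(encoded); // 42
+    System.out.println(decoded);
+  }
+}
+```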
 
-## Extensible Behaviors <a id="behaviors"></a>
+## Extensible Behaviors
 
 ### Pluggable balancer
 
 it is very unlikely that more data will be written to it, and thus paying the penalty
 to re-write a large file can be avoided. Implementations of this compaction strategy
 can be used to optimize the data that compactions will write.
 
-## General Administration <a id="admin"></a>
+## General Administration
 
 ### Monitor page
 
@@ -349,7 +340,7 @@ effect until server processes are restarted.
 Tables can be renamed easily because Accumulo uses internal table IDs and
 stores mappings between names and IDs in Zookeeper.
 
-## Internal Data Management <a id="internal_dm"></a>
+## Internal Data Management
 
 ### Locality groups
 
@@ -395,7 +386,7 @@ level of security that Accumulo provides. It is still a work in progress because
 the intermediate files created by Accumulo when recovering from a TabletServer
 failure are not encrypted.
 
-## On-demand Data Management <a id="ondemand_dm"></a>
+## On-demand Data Management
 
 ### Compactions
 
@@ -441,6 +432,10 @@ Added an operation to efficiently delete a range of rows from a table. Tablets
 that fall completely within a range are simply dropped. Tablets overlapping the
 beginning and end of the range are split, compacted, and then merged.  
 
+[FATE]: #fate
+[M-A-P]: {{ site.baseurl }}/release_notes/1.6.0#maven-plugin
+[CMUT]: {{ site.baseurl }}/1.6/apidocs/org/apache/accumulo/core/data/ConditionalMutation
+[CWRI]: {{ site.baseurl }}/1.6/apidocs/org/apache/accumulo/core/client/ConditionalWriter
 [4]: {{ site.baseurl }}/1.5/accumulo_user_manual#_writing_accumulo_clients
 [6]: {{ site.baseurl }}/1.5/accumulo_user_manual#_bulk_ingest
 [7]: {{ site.baseurl }}/1.6/accumulo_user_manual#_conditionalwriter

http://git-wip-us.apache.org/repos/asf/accumulo/blob/964cf811/old_documentation.md
----------------------------------------------------------------------
diff --git a/old_documentation.md b/old_documentation.md
index 4cfaad0..080e441 100644
--- a/old_documentation.md
+++ b/old_documentation.md
@@ -7,28 +7,44 @@ This page contains pointers to the documentation for major versions of Accumulo
 
 #### 1.5 Documentation
 
-* <a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=1.5.4" id="/1.5/README">README</a>
-* <a href="{{ site.baseurl }}/1.5/accumulo_user_manual.pdf" id="/1.5/accumulo_user_manual.pdf">PDF manual</a>
+* [README][README_15]
+* [PDF manual][MANUAL_PDF_15]
 * [html manual][MANUAL_HTML_15]
 * [examples][EXAMPLES_15]
-* <a href="{{ site.baseurl }}/1.5/apidocs" id="/1.5/apidocs">Javadoc</a>
+* [Javadoc][JAVADOC_15]
 
 #### 1.4 Documentation
 
-* <a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=f7d87b6e407de6597b6c0ca60ca1b6a321faf237" onClick="javascript: _gaq.push(['_trackPageview', '/1.4/README']);">README</a>
-* <a href="{{ site.baseurl }}/1.4/accumulo_user_manual.pdf" onClick="javascript: _gaq.push(['_trackPageview', '/1.4/accumulo_user_manual.pdf']);">pdf manual</a>
+* [README][README_14]
+* [PDF manual][MANUAL_PDF_14]
 * [html manual][MANUAL_HTML_14]
 * [examples][EXAMPLES_14]
-* <a href="{{ site.baseurl }}/1.4/apidocs" onClick="javascript: _gaq.push(['_trackPageview', '/1.4/apidocs']);">Javadoc</a>
+* [Javadoc][JAVADOC_14]
 
 #### 1.3 Documentation
-* <a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;h=86713d9b6add9038d5130b4a23ba4a79b72d0f15;hb=3b4ffc158945c1f834fc6f257f21484c61691d0f" onClick="javascript: _gaq.push(['_trackPageview', '/1.3/README']);">README</a>
+* [README][README_13]
 * [html manual][MANUAL_HTML_13]
 * [examples][EXAMPLES_13]
 
+[README_15]: https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=1.5.4
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.5/README']);" }
+[MANUAL_PDF_15]: {{ site.baseurl }}/1.5/accumulo_user_manual.pdf
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.5/accumulo_user_manual.pdf']);" }
 [MANUAL_HTML_15]: {{ site.baseurl }}/1.5/accumulo_user_manual "1.5 user manual"
 [EXAMPLES_15]: {{ site.baseurl }}/1.5/examples "1.5 examples"
+[JAVADOC_15]: {{ site.baseurl }}/1.5/apidocs
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.5/apidocs']);" }
+
+[README_14]: https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=f7d87b6e407de6597b6c0ca60ca1b6a321faf237
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.4/README']);" }
+[MANUAL_PDF_14]: {{ site.baseurl }}/1.4/accumulo_user_manual.pdf
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.4/accumulo_user_manual.pdf']);" }
 [MANUAL_HTML_14]: {{ site.baseurl }}/1.4/user_manual "1.4 user manual"
 [EXAMPLES_14]: {{ site.baseurl }}/1.4/examples "1.4 examples"
+[JAVADOC_14]: {{ site.baseurl }}/1.4/apidocs
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.4/apidocs']);" }
+
+[README_13]: https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;h=86713d9b6add9038d5130b4a23ba4a79b72d0f15;hb=3b4ffc158945c1f834fc6f257f21484c61691d0f
+{: onClick="javascript: _gaq.push(['_trackPageview', '/1.3/README']);" }
 [MANUAL_HTML_13]: {{ site.baseurl }}/user_manual_1.3-incubating/ "1.3 user manual"
 [EXAMPLES_13]: {{ site.baseurl }}/user_manual_1.3-incubating/examples/ "1.3 examples"


[4/4] accumulo git commit: Jekyll build from gh-pages:964cf81

Posted by ct...@apache.org.
Jekyll build from gh-pages:964cf81

Favor markdown over html

Use markdown tables whenever possible
Fix broken html tables with markdown in wikisearch example
Use markdown for glossary
Fix sections with markdown instead of html in bylaws
Other minor fixes using markdown over html


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/e938fe2b
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/e938fe2b
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/e938fe2b

Branch: refs/heads/asf-site
Commit: e938fe2bd27017efa6784332ffb87fdf925cc608
Parents: 6db455f
Author: Christopher Tubbs <ct...@apache.org>
Authored: Thu Apr 28 23:54:38 2016 -0400
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Thu Apr 28 23:56:07 2016 -0400

----------------------------------------------------------------------
 bylaws.html              | 229 ++++++-----
 css/accumulo.css         |   4 +-
 downloads/index.html     |  58 +--
 example/wikisearch.html  | 927 +++++++++++++++++++++++++-----------------
 feed.xml                 |   4 +-
 glossary.html            | 143 ++++---
 index.html               |  23 +-
 mailing_list.html        |  73 ++--
 notable_features.html    |  45 +-
 old_documentation.html   |  14 +-
 papers/index.html        | 144 ++++---
 release_notes/1.5.1.html | 103 ++---
 release_notes/1.5.2.html |  55 +--
 release_notes/1.5.3.html |  54 +--
 release_notes/1.5.4.html |  54 +--
 release_notes/1.6.0.html | 206 +++++-----
 release_notes/1.6.1.html |  55 +--
 release_notes/1.6.2.html |  87 ++--
 release_notes/1.6.3.html |  86 ++--
 release_notes/1.6.4.html |  38 +-
 release_notes/1.6.5.html |  86 ++--
 release_notes/1.7.0.html |  86 ++--
 release_notes/1.7.1.html |  86 ++--
 23 files changed, 1504 insertions(+), 1156 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/bylaws.html
----------------------------------------------------------------------
diff --git a/bylaws.html b/bylaws.html
index 016d9df..64eae08 100644
--- a/bylaws.html
+++ b/bylaws.html
@@ -337,34 +337,37 @@ See the <a href="https://www.apache.org/dev/pmc">PMC Guide</a> for more informat
 <p>Decisions regarding the project are made by votes on the primary project development mailing list: dev@accumulo.apache.org. Where necessary, PMC voting may take place on the private Accumulo PMC mailing list: private@accumulo.apache.org. Votes are clearly indicated by a subject line starting with [VOTE]. A vote message may only pertain to a single item’s approval; multiple items should be separated into multiple messages. Voting is carried out by replying to the vote mail. A vote may take on one of four forms, defined below.</p>
 
 <table class="table">
-  <tr>
-    <th>Vote</th>
-    <th>Meaning</th>
-  </tr>
-  <tr>
-    <td>+1</td>
-    <td>'Yes,' 'Agree,' or 'The action should be performed.' In general, this vote also indicates a willingness on the behalf of the voter to 'make it happen'.</td>
-  </tr>
-  <tr>
-    <td>+0</td>
-    <td>This vote indicates a willingness for the action under consideration to go ahead. The voter, however, will not be able to help.</td>
-  </tr>
-  <tr>
-    <td>-0</td>
-    <td>This vote indicates that the voter does not, in general, agree with the proposed action but is not concerned enough to prevent the action going ahead.</td>
-  </tr>
-  <tr>
-    <td>-1</td>
-    <td>'No', 'Disagree', or 'The action should not be performed.' On issues where consensus is required, this vote counts as a veto. All vetoes must contain an explanation of why the veto is appropriate. Vetoes with no explanation are void. It may also be appropriate for a -1 vote to include an alternative course of action.</td>
-  </tr>
+  <thead>
+    <tr>
+      <th>Vote</th>
+      <th>Meaning</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>+1</td>
+      <td><em>Yes</em>, <em>Agree</em>, or <em>The action should be performed</em>. In general, this vote also indicates a willingness on the behalf of the voter to <em>make it happen</em>.</td>
+    </tr>
+    <tr>
+      <td>+0</td>
+      <td>This vote indicates a willingness for the action under consideration to go ahead. The voter, however, will not be able to help.</td>
+    </tr>
+    <tr>
+      <td>-0</td>
+      <td>This vote indicates that the voter does not, in general, agree with the proposed action but is not concerned enough to prevent the action going ahead.</td>
+    </tr>
+    <tr>
+      <td>-1</td>
+      <td><em>No</em>, <em>Disagree</em>, or <em>The action should not be performed</em>. On issues where consensus is required, this vote counts as a veto. All vetoes must contain an explanation of why the veto is appropriate. Vetoes with no explanation are void. It may also be appropriate for a -1 vote to include an alternative course of action.</td>
+    </tr>
+  </tbody>
 </table>
 
 <p>All participants in the Accumulo project are encouraged to vote. For technical decisions, only the votes of active committers are binding. Non-binding votes are still useful for those with binding votes to understand the perception of an action across the wider Accumulo community. For PMC decisions, only the votes of active PMC members are binding.</p>
 
 <p>See the <a href="governance/voting">voting page</a> for more details on the mechanics of voting.</p>
 
-<p><a id="CTR"></a>
-## Commit Then Review (CTR)</p>
+<h2 id="commit-then-review-ctr">Commit Then Review (CTR)</h2>
 
 <p>Voting can also be applied to changes to the Accumulo codebase. Under the Commit Then Review policy, committers can make changes to the codebase without seeking approval beforehand, and the changes are assumed to be approved unless an objection is raised. Only if an objection is raised must a vote take place on the code change.</p>
 
@@ -375,22 +378,26 @@ See the <a href="https://www.apache.org/dev/pmc">PMC Guide</a> for more informat
 <p>These are the types of approvals that can be sought. Different actions require different types of approvals.</p>
 
 <table class="table">
-  <tr>
-    <th>Approval Type</th>
-    <th>Definition</th>
-  </tr>
-  <tr>
-    <td>Consensus Approval</td>
-    <td>A consensus approval vote passes with 3 binding +1 votes and no binding vetoes.</td>
-  </tr>
-  <tr>
-    <td>Majority Approval</td>
-    <td>A majority approval vote passes with 3 binding +1 votes and more binding +1 votes than -1 votes.</td>
-  </tr>
-  <tr>
-    <td>Lazy Approval (or Lazy Consensus)</td>
-    <td>An action with lazy approval is implicitly allowed unless a -1 vote is received, at which time, depending on the type of action, either majority approval or consensus approval must be obtained.  Lazy Approval can be either <em>stated</em> or <em>assumed</em>, as detailed on the <a href="governance/lazyConsensus">lazy consensus page</a>.</td>
-  </tr>
+  <thead>
+    <tr>
+      <th>Approval Type</th>
+      <th>Definition</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Consensus Approval</td>
+      <td>A consensus approval vote passes with 3 binding +1 votes and no binding vetoes.</td>
+    </tr>
+    <tr>
+      <td>Majority Approval</td>
+      <td>A majority approval vote passes with 3 binding +1 votes and more binding +1 votes than -1 votes.</td>
+    </tr>
+    <tr>
+      <td>Lazy Approval (or Lazy Consensus)</td>
+      <td>An action with lazy approval is implicitly allowed unless a -1 vote is received, at which time, depending on the type of action, either majority approval or consensus approval must be obtained.  Lazy Approval can be either <em>stated</em> or <em>assumed</em>, as detailed on the <a href="governance/lazyConsensus">lazy consensus page</a>.</td>
+    </tr>
+  </tbody>
 </table>
 
 <h2 id="vetoes">Vetoes</h2>
@@ -403,79 +410,83 @@ See the <a href="https://www.apache.org/dev/pmc">PMC Guide</a> for more informat
 
 <p>This section describes the various actions which are undertaken within the project, the corresponding approval required for that action and those who have binding votes over the action. It also specifies the minimum length of time that a vote must remain open, measured in days. In general, votes should not be called at times when it is known that interested members of the project will be unavailable.</p>
 
-<p>For Code Change actions, a committer may choose to employ assumed or stated Lazy Approval under the <a href="#CTR">CTR</a> policy. Assumed Lazy Approval has no minimum length of time before the change can be made.</p>
+<p>For Code Change actions, a committer may choose to employ assumed or stated Lazy Approval under the <a href="#commit-then-review-ctr">CTR</a> policy. Assumed Lazy Approval has no minimum length of time before the change can be made.</p>
 
 <table class="table">
-  <tr>
-    <th>Action</th>
-    <th>Description</th>
-    <th>Approval</th>
-    <th>Binding Votes</th>
-    <th>Min. Length (days)</th>
-  </tr>
-  <tr>
-    <td>Code Change</td>
-    <td>A change made to a codebase of the project. This includes source code, documentation, website content, etc.</td>
-    <td>Lazy approval, moving to consensus approval upon veto</td>
-    <td>Active committers</td>
-    <td>1</td>
-  </tr>
-  <tr>
-    <td>Release Plan</td>
-    <td>Defines the timetable and actions for an upcoming release. The plan also nominates a Release Manager.</td>
-    <td>Lazy approval, moving to majority approval upon veto</td>
-    <td>Active committers</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Release Plan Cancellation</td>
-    <td>Cancels an active release plan, due to a need to re-plan (e.g., discovery of a major issue).</td>
-    <td>Majority approval</td>
-    <td>Active committers</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Product Release</td>
-    <td>Accepts or rejects a release candidate as an official release of the project.</td>
-    <td>Majority approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Adoption of New Codebase</td>
-    <td>When the codebase for an existing, released product is to be replaced with an alternative codebase. If such a vote fails to gain approval, the existing code base will continue. This also covers the creation of new sub-projects within the project.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>7</td>
-  </tr>
-  <tr>
-    <td>New Committer</td>
-    <td>When a new committer is proposed for the project.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>New PMC Member</td>
-    <td>When a committer is proposed for the PMC.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>New PMC Chair</td>
-    <td>When a new PMC chair is chosen to succeed an outgoing chair.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>3</td>
-  </tr>
-  <tr>
-    <td>Modifying Bylaws</td>
-    <td>Modifying this document.</td>
-    <td>Consensus approval</td>
-    <td>Active PMC members</td>
-    <td>7</td>
-  </tr>
+  <thead>
+    <tr>
+      <th>Action</th>
+      <th>Description</th>
+      <th>Approval</th>
+      <th>Binding Votes</th>
+      <th>Min. Length (days)</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Code Change</td>
+      <td>A change made to a codebase of the project. This includes source code, documentation, website content, etc.</td>
+      <td>Lazy approval, moving to consensus approval upon veto</td>
+      <td>Active committers</td>
+      <td>1</td>
+    </tr>
+    <tr>
+      <td>Release Plan</td>
+      <td>Defines the timetable and actions for an upcoming release. The plan also nominates a Release Manager.</td>
+      <td>Lazy approval, moving to majority approval upon veto</td>
+      <td>Active committers</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>Release Plan Cancellation</td>
+      <td>Cancels an active release plan, due to a need to re-plan (e.g., discovery of a major issue).</td>
+      <td>Majority approval</td>
+      <td>Active committers</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>Product Release</td>
+      <td>Accepts or rejects a release candidate as an official release of the project.</td>
+      <td>Majority approval</td>
+      <td>Active PMC members</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>Adoption of New Codebase</td>
+      <td>When the codebase for an existing, released product is to be replaced with an alternative codebase. If such a vote fails to gain approval, the existing code base will continue. This also covers the creation of new sub-projects within the project.</td>
+      <td>Consensus approval</td>
+      <td>Active PMC members</td>
+      <td>7</td>
+    </tr>
+    <tr>
+      <td>New Committer</td>
+      <td>When a new committer is proposed for the project.</td>
+      <td>Consensus approval</td>
+      <td>Active PMC members</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>New PMC Member</td>
+      <td>When a committer is proposed for the PMC.</td>
+      <td>Consensus approval</td>
+      <td>Active PMC members</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>New PMC Chair</td>
+      <td>When a new PMC chair is chosen to succeed an outgoing chair.</td>
+      <td>Consensus approval</td>
+      <td>Active PMC members</td>
+      <td>3</td>
+    </tr>
+    <tr>
+      <td>Modifying Bylaws</td>
+      <td>Modifying this document.</td>
+      <td>Consensus approval</td>
+      <td>Active PMC members</td>
+      <td>7</td>
+    </tr>
+  </tbody>
 </table>
 
 <p>No other voting actions are defined; all other actions should presume Lazy Approval (defaulting to Consensus Approval upon veto). If an action is voted on multiple times, or if a different approval type is desired, these bylaws should be amended to include the action.</p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/css/accumulo.css
----------------------------------------------------------------------
diff --git a/css/accumulo.css b/css/accumulo.css
index 5f1969c..a87a2c0 100644
--- a/css/accumulo.css
+++ b/css/accumulo.css
@@ -74,11 +74,11 @@ footer > p {
     border-collapse:collapse;
 }
 
-#release_notes_testing, #release_notes_testing tbody tr, #release_notes_testing tbody tr th, #release_notes_testing tbody tr td {
+#release_notes_testing, #release_notes_testing tr, #release_notes_testing th, #release_notes_testing td {
     border: 2px solid black;
 }
 
-#release_notes_testing tbody tr th, #release_notes_testing tbody tr td {
+#release_notes_testing th, #release_notes_testing td {
     padding: 5px;
 }
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/downloads/index.html
----------------------------------------------------------------------
diff --git a/downloads/index.html b/downloads/index.html
index 1e7133e..043faef 100644
--- a/downloads/index.html
+++ b/downloads/index.html
@@ -314,7 +314,7 @@ var mirrorsCallback = function(json) {
 };
 
 // get mirrors when page is ready
-var mirrorURL = "/mirrors.cgi"; // http[s]://accumulo.apache.org/mirrors.cgi
+var mirrorURL = "http://accumulo.apache.org/mirrors.cgi"; // http[s]://accumulo.apache.org/mirrors.cgi
 $(function() { $.getJSON(mirrorURL + "?as_json", mirrorsCallback); });
 
 </script>
@@ -326,31 +326,33 @@ $(function() { $.getJSON(mirrorURL + "?as_json", mirrorsCallback); });
 
 <h2 id="current-releases">Current Releases</h2>
 
-<h3 id="span-classlabel-label-primarylatestspan">1.7.1 <span class="label label-primary">latest</span></h3>
+<h3 id="latest-label-label-primary-">1.7.1 <strong class="label label-primary">latest</strong></h3>
 
 <p>The most recent Apache Accumulo™ release is version 1.7.1. See the <a href="/release_notes/1.7.1" title="1.7.1 Release Notes">release notes</a> and <a href="https://issues.apache.org/jira/browse/ACCUMULO/fixforversion/12329940" title="1.7.1 CHANGES">CHANGES</a>.</p>
 
 <p>For convenience, <a href="https://www.apache.org/dist/accumulo/1.7.1/MD5SUM" title="1.7.1 MD5 file hashes">MD5</a> and <a href="https://www.apache.org/dist/accumulo/1.7.1/SHA1SUM" title="1.7.1 SHA1 file hashes">SHA1</a> hashes are also available.</p>
 
 <table class="table">
-<tr>
-<th>Generic Binaries</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" class="download_external" id="/downloads/accumulo-1.7.1-bin.tar.gz">accumulo-1.7.1-bin.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz.asc">ASC</a></td>
-</tr>
-<tr>
-<th>Source</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" class="download_external" id="/downloads/accumulo-1.7.1-src.tar.gz">accumulo-1.7.1-src.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz.asc">ASC</a></td>
-</tr>
+  <tbody>
+    <tr>
+      <td><strong>Generic Binaries</strong></td>
+      <td><a class="download_external" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz" id="/downloads/accumulo-1.7.1-bin.tar.gz" href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz">accumulo-1.7.1-bin.tar.gz</a></td>
+      <td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz.asc">ASC</a></td>
+    </tr>
+    <tr>
+      <td><strong>Source</strong></td>
+      <td><a class="download_external" link-suffix="/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz" id="/downloads/accumulo-1.7.1-src.tar.gz" href="https://www.apache.org/dyn/closer.lua/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz">accumulo-1.7.1-src.tar.gz</a></td>
+      <td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz.asc">ASC</a></td>
+    </tr>
+  </tbody>
 </table>
 
 <h4 id="documentation">1.7 Documentation</h4>
 <ul>
-  <li><a href="https://github.com/apache/accumulo/blob/rel/1.7.1/README.md" class="download_external" id="/1.7/README">README</a></li>
+  <li><a class="download_external" id="/1.7/README" href="https://github.com/apache/accumulo/blob/rel/1.7.1/README.md">README</a></li>
   <li><a href="/1.7/accumulo_user_manual" title="1.7 user manual">HTML User Manual</a></li>
   <li><a href="/1.7/examples" title="1.7 examples">Examples</a></li>
-  <li><a href="/1.7/apidocs" class="download_external" id="/1.7/apidocs">Javadoc</a></li>
+  <li><a class="download_external" id="/1.7/apidocs" href="/1.7/apidocs">Javadoc</a></li>
 </ul>
 
 <h3 id="section">1.6.5</h3>
@@ -360,25 +362,27 @@ $(function() { $.getJSON(mirrorURL + "?as_json", mirrorsCallback); });
 <p>For convenience, <a href="https://www.apache.org/dist/accumulo/1.6.5/MD5SUM" title="1.6.5 MD5 file hashes">MD5</a> and <a href="https://www.apache.org/dist/accumulo/1.6.5/SHA1SUM" title="1.6.5 SHA1 file hashes">SHA1</a> hashes are also available.</p>
 
 <table class="table">
-<tr>
-<th>Generic Binaries</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" class="download_external" id="/downloads/accumulo-1.6.5-bin.tar.gz">accumulo-1.6.5-bin.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz.asc">ASC</a></td>
-</tr>
-<tr>
-<th>Source</th>
-<td><a href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" class="download_external" id="/downloads/accumulo-1.6.5-src.tar.gz">accumulo-1.6.5-src.tar.gz</a></td>
-<td><a href="https://www.apache.org/dist/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz.asc">ASC</a></td>
-</tr>
+  <tbody>
+    <tr>
+      <td><strong>Generic Binaries</strong></td>
+      <td><a class="download_external" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz" id="/downloads/accumulo-1.6.5-bin.tar.gz" href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-bin.tar.gz">accumulo-1.6.5-bin.tar.gz</a></td>
+      <td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-bin.tar.gz.asc">ASC</a></td>
+    </tr>
+    <tr>
+      <td><strong>Source</strong></td>
+      <td><a class="download_external" link-suffix="/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz" id="/downloads/accumulo-1.6.5-src.tar.gz" href="https://www.apache.org/dyn/closer.lua/accumulo/1.6.5/accumulo-1.6.5-src.tar.gz">accumulo-1.6.5-src.tar.gz</a></td>
+      <td><a href="https://www.apache.org/dist/accumulo/1.7.1/accumulo-1.7.1-src.tar.gz.asc">ASC</a></td>
+    </tr>
+  </tbody>
 </table>
 
 <h4 id="documentation-1">1.6 Documentation</h4>
 <ul>
-  <li><a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=rel/1.6.5" class="download_external" id="/1.6/README">README</a></li>
-  <li><a href="https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-docs/1.6.5/accumulo-docs-1.6.5-user-manual.pdf" class="download_external" id="/1.6/accumulo_user_manual.pdf">PDF manual</a></li>
+  <li><a class="download_external" id="/1.6/README" href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=rel/1.6.5">README</a></li>
+  <li><a class="download_external" id="/1.6/accumulo_user_manual.pdf" href="https://search.maven.org/remotecontent?filepath=org/apache/accumulo/accumulo-docs/1.6.5/accumulo-docs-1.6.5-user-manual.pdf">PDF manual</a></li>
   <li><a href="/1.6/accumulo_user_manual" title="1.6 user manual">html manual</a></li>
   <li><a href="/1.6/examples" title="1.6 examples">examples</a></li>
-  <li><a href="/1.6/apidocs" class="download_external" id="/1.6/apidocs">Javadoc</a></li>
+  <li><a class="download_external" id="/1.6/apidocs" href="/1.6/apidocs">Javadoc</a></li>
 </ul>
 
 <h2 id="older-releases">Older releases</h2>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/example/wikisearch.html
----------------------------------------------------------------------
diff --git a/example/wikisearch.html b/example/wikisearch.html
index aeba341..b51cc20 100644
--- a/example/wikisearch.html
+++ b/example/wikisearch.html
@@ -237,384 +237,587 @@
 
 <h2 id="sample-application">Sample Application</h2>
 
-<p>Starting with release 1.4, Accumulo includes an example web application that provides a flexible,  scalable search over the articles of Wikipedia, a widely available medium-sized corpus.</p>
+<p>Starting with release 1.4, Accumulo includes an example web application that
+provides a flexible, scalable search over the articles of Wikipedia, a widely
+available medium-sized corpus.</p>
 
-<p>The example uses an indexing technique helpful for doing multiple logical tests against content.  In this case, we can perform a word search on Wikipedia articles.   The sample application takes advantage of 3 unique capabilities of Accumulo:</p>
+<p>The example uses an indexing technique helpful for doing multiple logical tests
+against content. In this case, we can perform a word search on Wikipedia
+articles. The sample application takes advantage of 3 unique capabilities of
+Accumulo:</p>
 
 <ol>
-  <li>Extensible iterators that operate within the distributed tablet servers of the key-value store</li>
-  <li>Custom aggregators which can efficiently condense information during the various life-cycles of the log-structured merge tree</li>
-  <li>Custom load balancing, which ensures that a table is evenly distributed on all tablet servers</li>
+  <li>Extensible iterators that operate within the distributed tablet servers of
+the key-value store</li>
+  <li>Custom aggregators which can efficiently condense information during the
+various life-cycles of the log-structured merge tree</li>
+  <li>Custom load balancing, which ensures that a table is evenly distributed on
+all tablet servers</li>
 </ol>
 
-<p>In the example, Accumulo tracks the cardinality of all terms as elements are ingested.  If the cardinality is small enough, it will track the set of documents by term directly.  For example:</p>
+<p>In the example, Accumulo tracks the cardinality of all terms as elements are
+ingested. If the cardinality is small enough, it will track the set of
+documents by term directly. For example:</p>
 
 <style type="text/css">
-table, td, th {
+table.wiki, table.wiki td, table.wiki th {
   padding-right: 5px;
   padding-left: 5px;
   border: 1px solid black;
   border-collapse: collapse;
 }
-td {
+table.wiki td {
   text-align: right;
 }
-.lt {
-  text-align: left;
-}
 </style>
 
-<table>
-<tr>
-<th>Row (word)</th>
-<th colspan="2">Value (count, document list)</th>
-</tr><tr>
-<td>Octopus
-<td>2
-<td class="lt">[Document 57, Document 220]
-<tr>
-<td>Other
-<td>172,849
-<td class="lt">[]
-<tr>
-<td>Ostrich
-<td>1
-<td class="lt">[Document 901]
-
-
-
-Searches can be optimized to focus on low-cardinality terms.  To create these counts, the example installs “aggregators” which are used to combine inserted values.  The ingester just writes simple  “(Octopus, 1, Document 57)” tuples.  The tablet servers then used the installed aggregators to merge the cells as the data is re-written, or queried.  This reduces the in-memory locking required to update high-cardinality terms, and defers aggregation to a later time, where it can be done more efficiently.
-
-The example also creates a reverse word index to map each word to the document in which it appears. But it does this by choosing an arbitrary partition for the document.  The article, and the word index for the article are grouped together into the same partition.  For example:
-
-<table>
-<tr>
-<th>Row (partition)
-<th>Column Family
-<th>Column Qualifier
-<th>Value
-<tr>
-<td>1
-<td>D
-<td>Document 57
-<td>“smart Octopus”
-<tr>
-<td>1
-<td>Word, Octopus
-<td>Document 57
-<td>
-<tr>
-<td>1
-<td>Word, smart
-<td>Document 57
-<td>
-<tr>
-<td>...
-<td>
-<td>
-<td>
-<tr>
-<td>2
-<td>D
-<td>Document 220
-<td>“big Octopus”
-<tr>
-<td>2
-<td>Word, big
-<td>Document 220
-<td>
-<tr>
-<td>2
-<td>Word, Octopus
-<td>Document 220
-<td>
-
-
-Of course, there would be large numbers of documents in each partition, and the elements of those documents would be interlaced according to their sort order.
-
-By dividing the index space into partitions, the multi-word searches can be performed in parallel across all the nodes.  Also, by grouping the document together with its index, a document can be retrieved without a second request from the client.  The query “octopus” and “big” will be performed on all the servers, but only those partitions for which the low-cardinality term “octopus” can be found by using the aggregated reverse index information.  The query for a document is performed by extensions provided in the example.  These extensions become part of the tablet server's iterator stack.  By cloning the underlying iterators, the query extensions can seek to specific words within the index, and when it finds a matching document, it can then seek to the document location and retrieve the contents.
-
-We loaded the example on a  cluster of 10 servers, each with 12 cores, and 32G RAM, 6 500G drives.  Accumulo tablet servers were allowed a maximum of 3G of working memory, of which 2G was dedicated to caching file data.
-
-Following the instructions in the example, the Wikipedia XML data for articles was loaded for English, Spanish and German languages into 10 partitions.  The data is not partitioned by language: multiple languages were used to get a larger set of test data.  The data load took around 8 hours, and has not been optimized for scale.  Once the data was loaded, the content was compacted which took about 35 minutes.
-
-The example uses the language-specific tokenizers available from the Apache Lucene project for Wikipedia data.
-
-Original files:
-
-<table>
-<tr>
-<th>Articles
-<th>Compressed size
-<th>Filename
-<tr>
-<td>1.3M
-<td>2.5G
-<td>dewiki-20111120-pages-articles.xml.bz2
-<tr>
-<td>3.8M
-<td>7.9G
-<td>enwiki-20111115-pages-articles.xml.bz2
-<tr>
-<td>0.8M
-<td>1.4G
-<td>eswiki-20111112-pages-articles.xml.bz2
-
-
-The resulting tables:
-
-    &gt; du -p wiki.*
-          47,325,680,634 [wiki]
-           5,125,169,305 [wikiIndex]
-                     413 [wikiMetadata]
-           5,521,690,682 [wikiReverseIndex]
-
-Roughly a 6:1 increase in size.
-
-We performed the following queries, and repeated the set 5 times.  The query language is much more expressive than what is shown below.  The actual query specified that these words were to be found in the body of the article.  Regular expressions, searches within titles, negative tests, etc are available.
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<th>Matches
-<th>Result Size
-<tr>
-<td>“old” and “man” and “sea”
-<td>4.07
-<td>3.79
-<td>3.65
-<td>3.85
-<td>3.67
-<td>22,956
-<td>3,830,102
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>3.06
-<td>3.06
-<td>2.78
-<td>3.02
-<td>2.92
-<td>10,755
-<td>1,757,293
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>0.08
-<td>0.08
-<td>0.1
-<td>0.11
-<td>0.1
-<td>6
-<td>808
-<tr>
-<td>“fast”  and ( “furious” or “furriest”) 
-<td>1.34
-<td>1.33
-<td>1.3
-<td>1.31
-<td>1.31
-<td>2,973
-<td>493,800
-<tr>
-<td>“slashdot” and “grok”
-<td>0.06
-<td>0.06
-<td>0.06
-<td>0.06
-<td>0.06
-<td>14
-<td>2,371
-<tr>
-<td>“three” and “little” and “pigs”
-<td>0.92
-<td>0.91
-<td>0.9
-<td>1.08
-<td>0.88
-<td>2,742
-<td>481,531
-
-
-Because the terms are tested together within the tablet server, even fairly high-cardinality terms such as “old,” “man,” and “sea” can be tested efficiently, without needing to return to the client, or make distributed calls between servers to perform the intersection between terms.
-
-For reference, here are the cardinalities for all the terms in the query (remember, this is across all languages loaded):
-
-<table>
-<tr> <th>Term <th> Cardinality
-<tr> <td> ducky <td> 795
-<tr> <td> ernie <td> 13,433
-<tr> <td> fast <td> 166,813
-<tr> <td> furious <td> 10,535
-<tr> <td> furriest <td> 45
-<tr> <td> grok <td> 1,168
-<tr> <td> in <td> 1,884,638
-<tr> <td> little <td> 320,748
-<tr> <td> man <td> 548,238
-<tr> <td> old <td> 720,795
-<tr> <td> paris <td> 232,464
-<tr> <td> pigs <td> 8,356
-<tr> <td> rubber <td> 17,235
-<tr> <td> sea <td> 247,231
-<tr> <td> slashdot <td> 2,343
-<tr> <td> spring <td> 125,605
-<tr> <td> the <td> 3,509,498
-<tr> <td> three <td> 718,810
-
-
-
-Accumulo supports caching index information, which is turned on by default, and for the non-index blocks of a file, which is not. After turning on data block caching for the wiki table:
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<tr>
-<td>“old” and “man” and “sea”
-<td>2.47
-<td>2.48
-<td>2.51
-<td>2.48
-<td>2.49
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>1.33
-<td>1.42
-<td>1.6
-<td>1.61
-<td>1.47
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>0.07
-<td>0.08
-<td>0.07
-<td>0.07
-<td>0.07
-<tr>
-<td>“fast” and ( “furious” or “furriest”) 
-<td>1.28
-<td>0.78
-<td>0.77
-<td>0.79
-<td>0.78
-<tr>
-<td>“slashdot” and “grok”
-<td>0.04
-<td>0.04
-<td>0.04
-<td>0.04
-<td>0.04
-<tr>
-<td>“three” and “little” and “pigs”
-<td>0.55
-<td>0.32
-<td>0.32
-<td>0.31
-<td>0.27
-
-<table>
-<p>
-For comparison, these are the cold start lookup times (restart Accumulo, and drop the operating system disk cache):
-
-<table>
-<tr>
-<th>Query
-<th>Sample
-<tr>
-<td>“old” and “man” and “sea”
-<td>13.92
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>8.46
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>2.96
-<tr>
-<td>“fast” and ( “furious” or “furriest”) 
-<td>6.77
-<tr>
-<td>“slashdot” and “grok”
-<td>4.06
-<tr>
-<td>“three” and “little” and “pigs”
-<td>8.13
-
-
-### Random Query Load
-
-Random queries were generated using common english words.  A uniform random sample of 3 to 5 words taken from the 10000 most common words in the Project Gutenberg's online text collection were joined with “and”.  Words containing anything other than letters (such as contractions) were not used.  A client was started simultaneously on each of the 10 servers and each ran 100 random queries (1000 queries total).
-
-
-<table>
-<tr>
-<th>Time
-<th>Count
-<tr>
-<td>41.97
-<td>440,743
-<tr>
-<td>41.61
-<td>320,522
-<tr>
-<td>42.11
-<td>347,969
-<tr>
-<td>38.32
-<td>275,655
-
-
-### Query Load During Ingest
-
-The English wikipedia data was re-ingested on top of the existing, compacted data. The following  query samples were taken in 5 minute intervals while ingesting 132 articles/second:
-
-
-<table>
-<tr>
-<th>Query
-<th colspan="5">Samples (seconds)
-<tr>
-<td>“old” and “man” and “sea”
-<td>4.91
-<td>3.92
-<td>11.58
-<td>9.86
-<td>10.21
-<tr>
-<td>“paris” and “in” and “the” and “spring”
-<td>5.03
-<td>3.37
-<td>12.22
-<td>3.29
-<td>9.46
-<tr>
-<td>“rubber” and “ducky” and “ernie”
-<td>4.21
-<td>2.04
-<td>8.57
-<td>1.54
-<td>1.68
-<tr>
-<td>“fast”  and ( “furious” or “furriest”) 
-<td>5.84
-<td>2.83
-<td>2.56
-<td>3.12
-<td>3.09
-<tr>
-<td>“slashdot” and “grok”
-<td>5.68
-<td>2.62
-<td>2.2
-<td>2.78
-<td>2.8
-<tr>
-<td>“three” and “little” and “pigs”
-<td>7.82
-<td>3.42
-<td>2.79
-<td>3.29
-<td>3.3
-
-</td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></th></th></tr></table></td></td></tr></td></td></tr></td></td></tr></td></td></tr></th></th></tr></table></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></th></th></tr></table></p></table></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></td></td></td></td></td></td></tr></th></th></tr></table></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></td></td></tr></th></th></tr></table></td></td></td></td></td></td>
 </td></td></tr></td></td></td></td></td></td></td></td></tr></td></td></td></td></td></td></td></td></tr></td></td></td></td></td></td></td></td></tr></td></td></td></td></td></td></td></td></tr></td></td></td></td></td></td></td></td></tr></th></th></th></th></tr></table></td></td></td></tr></td></td></td></tr></td></td></td></tr></th></th></th></tr></table></td></td></td></td></tr></td></td></td></td></tr></td></td></td></td></tr></td></td></td></td></tr></td></td></td></td></tr></td></td></td></td></tr></td></td></td></td></tr></th></th></th></th></tr></table></td></td></td></tr></td></td></td></tr></td></td></td></tr></table>
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Row (word)</th>
+      <th style="text-align: right">Value (count)</th>
+      <th style="text-align: left">Value (document list)</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Octopus</td>
+      <td style="text-align: right">2</td>
+      <td style="text-align: left">[Document 57, Document 220]</td>
+    </tr>
+    <tr>
+      <td>Other</td>
+      <td style="text-align: right">172,849</td>
+      <td style="text-align: left">[]</td>
+    </tr>
+    <tr>
+      <td>Ostrich</td>
+      <td style="text-align: right">1</td>
+      <td style="text-align: left">[Document 901]</td>
+    </tr>
+  </tbody>
+</table>
+
+<p>Searches can be optimized to focus on low-cardinality terms. To create these
+counts, the example installs “aggregators” which are used to combine inserted
+values. The ingester just writes simple “(Octopus, 1, Document 57)” tuples.
+The tablet servers then use the installed aggregators to merge the cells as
+the data is re-written, or queried. This reduces the in-memory locking
+required to update high-cardinality terms, and defers aggregation to a later
+time, where it can be done more efficiently.</p>
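
<p>(Editor’s sketch, not part of the original example: assuming an existing
<code>Connector</code> named <code>conn</code>, a table named <code>wikiIndex</code>, and a
summing combiner configured on the count column, an ingester could emit one such tuple as
follows; the column layout here is illustrative, not the wikisearch example’s own schema.)</p>

<pre><code>import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.data.Mutation;

public class TermCountSketch {
  // Hypothetical: write one (term, 1, document) tuple; a combiner sums the counts later.
  static void writeTuple(Connector conn) throws Exception {
    BatchWriter writer = conn.createBatchWriter("wikiIndex", new BatchWriterConfig());
    Mutation m = new Mutation("Octopus");          // row = the term
    m.put("count", "Document 57", "1");            // qualifier = document, value = 1
    writer.addMutation(m);
    writer.close();
  }
}
</code></pre>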
+
+<p>The example also creates a reverse word index to map each word to the document
+in which it appears. But it does this by choosing an arbitrary partition for
+the document. The article, and the word index for the article are grouped
+together into the same partition. For example:</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Row (partition)</th>
+      <th>Column Family</th>
+      <th>Column Qualifier</th>
+      <th>Value</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>1</td>
+      <td>D</td>
+      <td>Document 57</td>
+      <td>“smart Octopus”</td>
+    </tr>
+    <tr>
+      <td>1</td>
+      <td>Word, Octopus</td>
+      <td>Document 57</td>
+      <td> </td>
+    </tr>
+    <tr>
+      <td>1</td>
+      <td>Word, smart</td>
+      <td>Document 57</td>
+      <td> </td>
+    </tr>
+    <tr>
+      <td>…</td>
+      <td> </td>
+      <td> </td>
+      <td> </td>
+    </tr>
+    <tr>
+      <td>2</td>
+      <td>D</td>
+      <td>Document 220</td>
+      <td>“big Octopus”</td>
+    </tr>
+    <tr>
+      <td>2</td>
+      <td>Word, big</td>
+      <td>Document 220</td>
+      <td> </td>
+    </tr>
+    <tr>
+      <td>2</td>
+      <td>Word, Octopus</td>
+      <td>Document 220</td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
+
+<p>Of course, there would be large numbers of documents in each partition, and the
+elements of those documents would be interlaced according to their sort order.</p>
+
+<p>By dividing the index space into partitions, the multi-word searches can be
+performed in parallel across all the nodes. Also, by grouping the document
+together with its index, a document can be retrieved without a second request
+from the client. The query “octopus” and “big” will be performed on all the
+servers, but only those partitions for which the low-cardinality term “octopus”
+can be found by using the aggregated reverse index information. The query for a
+document is performed by extensions provided in the example. These extensions
+become part of the tablet server’s iterator stack. By cloning the underlying
+iterators, the query extensions can seek to specific words within the index,
+and when it finds a matching document, it can then seek to the document
+location and retrieve the contents.</p>
+
+<p>We loaded the example on a cluster of 10 servers, each with 12 cores, and 32G
+RAM, 6 500G drives. Accumulo tablet servers were allowed a maximum of 3G of
+working memory, of which 2G was dedicated to caching file data.</p>
+
+<p>Following the instructions in the example, the Wikipedia XML data for articles
+was loaded for English, Spanish and German languages into 10 partitions. The
+data is not partitioned by language: multiple languages were used to get a
+larger set of test data. The data load took around 8 hours, and has not been
+optimized for scale. Once the data was loaded, the content was compacted which
+took about 35 minutes.</p>
+
+<p>The example uses the language-specific tokenizers available from the Apache
+Lucene project for Wikipedia data.</p>
+
+<p>Original files:</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Articles</th>
+      <th>Compressed size</th>
+      <th>Filename</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>1.3M</td>
+      <td>2.5G</td>
+      <td>dewiki-20111120-pages-articles.xml.bz2</td>
+    </tr>
+    <tr>
+      <td>3.8M</td>
+      <td>7.9G</td>
+      <td>enwiki-20111115-pages-articles.xml.bz2</td>
+    </tr>
+    <tr>
+      <td>0.8M</td>
+      <td>1.4G</td>
+      <td>eswiki-20111112-pages-articles.xml.bz2</td>
+    </tr>
+  </tbody>
+</table>
+
+<p>The resulting tables:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>&gt; du -p wiki.*
+      47,325,680,634 [wiki]
+       5,125,169,305 [wikiIndex]
+                 413 [wikiMetadata]
+       5,521,690,682 [wikiReverseIndex]
+</code></pre>
+</div>
+
+<p>Roughly a 6:1 increase in size.</p>
+
+<p>We performed the following queries, and repeated the set 5 times. The query
+language is much more expressive than what is shown below. The actual query
+specified that these words were to be found in the body of the article. Regular
+expressions, searches within titles, negative tests, etc are available.</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Query</th>
+      <th>Sample 1 (seconds)</th>
+      <th>Sample 2 (seconds)</th>
+      <th>Sample 3 (seconds)</th>
+      <th>Sample 4 (seconds)</th>
+      <th>Sample 5 (seconds)</th>
+      <th>Matches</th>
+      <th>Result Size</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>“old” and “man” and “sea”</td>
+      <td>4.07</td>
+      <td>3.79</td>
+      <td>3.65</td>
+      <td>3.85</td>
+      <td>3.67</td>
+      <td>22,956</td>
+      <td>3,830,102</td>
+    </tr>
+    <tr>
+      <td>“paris” and “in” and “the” and “spring”</td>
+      <td>3.06</td>
+      <td>3.06</td>
+      <td>2.78</td>
+      <td>3.02</td>
+      <td>2.92</td>
+      <td>10,755</td>
+      <td>1,757,293</td>
+    </tr>
+    <tr>
+      <td>“rubber” and “ducky” and “ernie”</td>
+      <td>0.08</td>
+      <td>0.08</td>
+      <td>0.1</td>
+      <td>0.11</td>
+      <td>0.1</td>
+      <td>6</td>
+      <td>808</td>
+    </tr>
+    <tr>
+      <td>“fast” and ( “furious” or “furriest”)</td>
+      <td>1.34</td>
+      <td>1.33</td>
+      <td>1.3</td>
+      <td>1.31</td>
+      <td>1.31</td>
+      <td>2,973</td>
+      <td>493,800</td>
+    </tr>
+    <tr>
+      <td>“slashdot” and “grok”</td>
+      <td>0.06</td>
+      <td>0.06</td>
+      <td>0.06</td>
+      <td>0.06</td>
+      <td>0.06</td>
+      <td>14</td>
+      <td>2,371</td>
+    </tr>
+    <tr>
+      <td>“three” and “little” and “pigs”</td>
+      <td>0.92</td>
+      <td>0.91</td>
+      <td>0.9</td>
+      <td>1.08</td>
+      <td>0.88</td>
+      <td>2,742</td>
+      <td>481,531</td>
+    </tr>
+  </tbody>
+</table>
+
+<p>Because the terms are tested together within the tablet server, even fairly
+high-cardinality terms such as “old,” “man,” and “sea” can be tested
+efficiently, without needing to return to the client, or make distributed calls
+between servers to perform the intersection between terms.</p>
+
+<p>For reference, here are the cardinalities for all the terms in the query
+(remember, this is across all languages loaded):</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Term</th>
+      <th>Cardinality</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>ducky</td>
+      <td>795</td>
+    </tr>
+    <tr>
+      <td>ernie</td>
+      <td>13,433</td>
+    </tr>
+    <tr>
+      <td>fast</td>
+      <td>166,813</td>
+    </tr>
+    <tr>
+      <td>furious</td>
+      <td>10,535</td>
+    </tr>
+    <tr>
+      <td>furriest</td>
+      <td>45</td>
+    </tr>
+    <tr>
+      <td>grok</td>
+      <td>1,168</td>
+    </tr>
+    <tr>
+      <td>in</td>
+      <td>1,884,638</td>
+    </tr>
+    <tr>
+      <td>little</td>
+      <td>320,748</td>
+    </tr>
+    <tr>
+      <td>man</td>
+      <td>548,238</td>
+    </tr>
+    <tr>
+      <td>old</td>
+      <td>720,795</td>
+    </tr>
+    <tr>
+      <td>paris</td>
+      <td>232,464</td>
+    </tr>
+    <tr>
+      <td>pigs</td>
+      <td>8,356</td>
+    </tr>
+    <tr>
+      <td>rubber</td>
+      <td>17,235</td>
+    </tr>
+    <tr>
+      <td>sea</td>
+      <td>247,231</td>
+    </tr>
+    <tr>
+      <td>slashdot</td>
+      <td>2,343</td>
+    </tr>
+    <tr>
+      <td>spring</td>
+      <td>125,605</td>
+    </tr>
+    <tr>
+      <td>the</td>
+      <td>3,509,498</td>
+    </tr>
+    <tr>
+      <td>three</td>
+      <td>718,810</td>
+    </tr>
+  </tbody>
+</table>
+
+<p>Accumulo supports caching index information, which is turned on by default, and
+caching the non-index data blocks of a file, which is not. After turning on data block
+caching for the wiki table:</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Query</th>
+      <th>Sample 1 (seconds)</th>
+      <th>Sample 2 (seconds)</th>
+      <th>Sample 3 (seconds)</th>
+      <th>Sample 4 (seconds)</th>
+      <th>Sample 5 (seconds)</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>“old” and “man” and “sea”</td>
+      <td>2.47</td>
+      <td>2.48</td>
+      <td>2.51</td>
+      <td>2.48</td>
+      <td>2.49</td>
+    </tr>
+    <tr>
+      <td>“paris” and “in” and “the” and “spring”</td>
+      <td>1.33</td>
+      <td>1.42</td>
+      <td>1.6</td>
+      <td>1.61</td>
+      <td>1.47</td>
+    </tr>
+    <tr>
+      <td>“rubber” and “ducky” and “ernie”</td>
+      <td>0.07</td>
+      <td>0.08</td>
+      <td>0.07</td>
+      <td>0.07</td>
+      <td>0.07</td>
+    </tr>
+    <tr>
+      <td>“fast” and ( “furious” or “furriest”)</td>
+      <td>1.28</td>
+      <td>0.78</td>
+      <td>0.77</td>
+      <td>0.79</td>
+      <td>0.78</td>
+    </tr>
+    <tr>
+      <td>“slashdot” and “grok”</td>
+      <td>0.04</td>
+      <td>0.04</td>
+      <td>0.04</td>
+      <td>0.04</td>
+      <td>0.04</td>
+    </tr>
+    <tr>
+      <td>“three” and “little” and “pigs”</td>
+      <td>0.55</td>
+      <td>0.32</td>
+      <td>0.32</td>
+      <td>0.31</td>
+      <td>0.27</td>
+    </tr>
+  </tbody>
+</table>
+
+<p>For comparison, these are the cold start lookup times (restart Accumulo, and
+drop the operating system disk cache):</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Query</th>
+      <th>Sample</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>“old” and “man” and “sea”</td>
+      <td>13.92</td>
+    </tr>
+    <tr>
+      <td>“paris” and “in” and “the” and “spring”</td>
+      <td>8.46</td>
+    </tr>
+    <tr>
+      <td>“rubber” and “ducky” and “ernie”</td>
+      <td>2.96</td>
+    </tr>
+    <tr>
+      <td>“fast” and ( “furious” or “furriest”)</td>
+      <td>6.77</td>
+    </tr>
+    <tr>
+      <td>“slashdot” and “grok”</td>
+      <td>4.06</td>
+    </tr>
+    <tr>
+      <td>“three” and “little” and “pigs”</td>
+      <td>8.13</td>
+    </tr>
+  </tbody>
+</table>
+
+<h3 id="random-query-load">Random Query Load</h3>
+
+<p>Random queries were generated using common English words. A uniform random
+sample of 3 to 5 words taken from the 10000 most common words in the Project
+Gutenberg’s online text collection were joined with “and”. Words containing
+anything other than letters (such as contractions) were not used. A client was
+started simultaneously on each of the 10 servers and each ran 100 random
+queries (1000 queries total).</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Time</th>
+      <th>Count</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>41.97</td>
+      <td>440,743</td>
+    </tr>
+    <tr>
+      <td>41.61</td>
+      <td>320,522</td>
+    </tr>
+    <tr>
+      <td>42.11</td>
+      <td>347,969</td>
+    </tr>
+    <tr>
+      <td>38.32</td>
+      <td>275,655</td>
+    </tr>
+  </tbody>
+</table>
+
+<h3 id="query-load-during-ingest">Query Load During Ingest</h3>
+
+<p>The English wikipedia data was re-ingested on top of the existing, compacted
+data. The following query samples were taken in 5 minute intervals while
+ingesting 132 articles/second:</p>
+
+<table class="wiki">
+  <thead>
+    <tr>
+      <th>Query</th>
+      <th>Sample 1 (seconds)</th>
+      <th>Sample 2 (seconds)</th>
+      <th>Sample 3 (seconds)</th>
+      <th>Sample 4 (seconds)</th>
+      <th>Sample 5 (seconds)</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>“old” and “man” and “sea”</td>
+      <td>4.91</td>
+      <td>3.92</td>
+      <td>11.58</td>
+      <td>9.86</td>
+      <td>10.21</td>
+    </tr>
+    <tr>
+      <td>“paris” and “in” and “the” and “spring”</td>
+      <td>5.03</td>
+      <td>3.37</td>
+      <td>12.22</td>
+      <td>3.29</td>
+      <td>9.46</td>
+    </tr>
+    <tr>
+      <td>“rubber” and “ducky” and “ernie”</td>
+      <td>4.21</td>
+      <td>2.04</td>
+      <td>8.57</td>
+      <td>1.54</td>
+      <td>1.68</td>
+    </tr>
+    <tr>
+      <td>“fast” and ( “furious” or “furriest”)</td>
+      <td>5.84</td>
+      <td>2.83</td>
+      <td>2.56</td>
+      <td>3.12</td>
+      <td>3.09</td>
+    </tr>
+    <tr>
+      <td>“slashdot” and “grok”</td>
+      <td>5.68</td>
+      <td>2.62</td>
+      <td>2.2</td>
+      <td>2.78</td>
+      <td>2.8</td>
+    </tr>
+    <tr>
+      <td>“three” and “little” and “pigs”</td>
+      <td>7.82</td>
+      <td>3.42</td>
+      <td>2.79</td>
+      <td>3.29</td>
+      <td>3.3</td>
+    </tr>
+  </tbody>
+</table>
 
       </div>
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index c10c4e0..2840926 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>https://accumulo.apache.org/</link>
     <atom:link href="https://accumulo.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Thu, 28 Apr 2016 10:26:45 -0400</pubDate>
-    <lastBuildDate>Thu, 28 Apr 2016 10:26:45 -0400</lastBuildDate>
+    <pubDate>Thu, 28 Apr 2016 23:56:03 -0400</pubDate>
+    <lastBuildDate>Thu, 28 Apr 2016 23:56:03 -0400</lastBuildDate>
     <generator>Jekyll v3.0.3</generator>
     
   </channel>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/glossary.html
----------------------------------------------------------------------
diff --git a/glossary.html b/glossary.html
index 219d1a5..cf6464f 100644
--- a/glossary.html
+++ b/glossary.html
@@ -239,56 +239,99 @@
         <h1 class="title">Glossary</h1>
         
         <dl>
-<dt>authorizations</dt>
-<dd>a set of strings associated with a user or with a particular scan that will be used to determine which key/value pairs are visible to the user.</dd>
-<dt>cell</dt>
-<dd>a set of key/value pairs whose keys differ only in timestamp.</dd>
-<dt>column</dt>
-<dd>the portion of the key that sorts after the row and is divided into family, qualifier, and visibility.</dd>
-<dt>column family</dt>
-<dd>the portion of the key that sorts second and controls locality groups, the row/column hybrid nature of accumulo.</dd>
-<dt>column qualifier</dt>
-<dd>the portion of the key that sorts third and provides additional key uniqueness.</dd>
-<dt>column visibility</dt>
-<dd>the portion of the key that sorts fourth and controls user access to individual key/value pairs. Visibilities are boolean AND (&amp;) and OR (|) combinations of authorization strings with parentheses required to determine ordering, e.g. (AB&amp;C)|DEF.</dd>
-<dt>iterator</dt>
-<dd>a mechanism for modifying tablet-local portions of the key/value space. Iterators are used for standard administrative tasks as well as for custom processing.</dd>
-<dt>iterator priority</dt>
-<dd>an iterator must be configured with a particular scope and priority.  When a tablet server enters that scope, it will instantiate iterators in priority order starting from the smallest priority and ending with the largest, and apply each to the data read before rewriting the data or sending the data to the user.</dd>
-<dt>iterator scopes</dt>
-<dd>the possible scopes for iterators are where the tablet server is already reading and/or writing data: minor compaction / flush time (<em>minc</em> scope), major compaction / file merging time (<em>majc</em> scope), and query time (<em>scan</em> scope).</dd>
-<dt>gc</dt>
-<dd>process that identifies temporary files in HDFS that are no longer needed by any process, and deletes them.</dd>
-<dt>key</dt>
-<dd>the key into the distributed sorted map which is accumulo.  The key is subdivided into row, column, and timestamp.  The column is further divided into  family, qualifier, and visibility.</dd>
-<dt>locality group</dt>
-<dd>a set of column families that will be grouped together on disk.  With no locality groups configured, data is stored on disk in row order.  If each column family were configured to be its own locality group, the data for each column would be stored separately, in row order.  Configuring sets of columns into locality groups is a compromise between the two approaches and will improve performance when multiple columns are accessed in the same scan.</dd>
-<dt>log-structured merge-tree</dt>
-<dd>the sorting / flushing / merging scheme on which BigTable's design is based.</dd>
-<dt>logger</dt>
-<dd>in 1.4 and older, process that accepts updates to tablet servers and writes them to local on-disk storage for redundancy. in 1.5 the functionality was subsumed by the tablet server and datanode with HDFS writes.</dd>
-<dt>major compaction</dt>
-<dd>merging multiple files into a single file.  If all of a tablet's files are merged into a single file, it is called a <em>full major compaction</em>.</dd>
-<dt>master</dt>
-<dd>process that detects and responds to tablet failures, balances load across tablet servers by assigning and migrating tablets when required, coordinates table operations, and handles tablet server logistics (startup, shutdown, recovery).</dd>
-<dt>minor compaction</dt>
-<dd>flushing data from memory to disk.  Usually this creates a new file for a tablet, but if the memory flushed is merge-sorted in with data from an existing file (replacing that file), it is called a <em>merging minor compaction</em>.</dd>
-<dt>monitor</dt>
-<dd>process that displays status and usage information for all Accumulo components.</dd>
-<dt>permissions</dt>
-<dd>administrative abilities that must be given to a user such as creating tables or users and changing permissions or configuration parameters.</dd>
-<dt>row</dt>
-<dd>the portion of the key that controls atomicity.  Keys with the same row are guaranteed to remain on a single tablet hosted by a single tablet server, therefore multiple key/value pairs can be added to or removed from a row at the same time. The row is used for the primary sorting of the key.</dd>
-<dt>scan</dt>
-<dd>reading a range of key/value pairs.</dd>
-<dt>tablet</dt>
-<dd>a contiguous key range; the unit of work for a tablet server.</dd>
-<dt>tablet servers</dt>
-<dd>a set of servers that hosts reads and writes for tablets.  Each server hosts a distinct set of tablets at any given time, but the tablets may be hosted by different servers over time.</dd>
-<dt>timestamp</dt>
-<dd>the portion of the key that controls versioning.  Otherwise identical keys with differing timestamps are considered to be versions of a single <em>cell</em>.  Accumulo can be configured to keep the <em>N</em> newest versions of each <em>cell</em>.  When a deletion entry is inserted, it deletes all earlier versions for its cell.</dd>
-<dt>value</dt>
-<dd>immutable bytes associated with a particular key.</dd>
+  <dt>authorizations</dt>
+  <dd>a set of strings associated with a user or with a particular scan that will
+be used to determine which key/value pairs are visible to the user.</dd>
+  <dt>cell</dt>
+  <dd>a set of key/value pairs whose keys differ only in timestamp.</dd>
+  <dt>column</dt>
+  <dd>the portion of the key that sorts after the row and is divided into family,
+qualifier, and visibility.</dd>
+  <dt>column family</dt>
+  <dd>the portion of the key that sorts second and controls locality groups,
+which give Accumulo its row/column hybrid nature.</dd>
+  <dt>column qualifier</dt>
+  <dd>the portion of the key that sorts third and provides additional key
+uniqueness.</dd>
+  <dt>column visibility</dt>
+  <dd>the portion of the key that sorts fourth and controls user access to
+individual key/value pairs. Visibilities are boolean AND (&amp;) and OR (|)
+combinations of authorization strings, with parentheses required to make
+precedence explicit, e.g. (AB&amp;C)|DEF.</dd>
+  <dt>iterator</dt>
+  <dd>a mechanism for modifying tablet-local portions of the key/value space.
+Iterators are used for standard administrative tasks as well as for custom
+processing.</dd>
+  <dt>iterator priority</dt>
+  <dd>an iterator must be configured with a particular scope and priority. When a
+tablet server enters that scope, it will instantiate iterators in priority
+order starting from the smallest priority and ending with the largest, and
+apply each to the data read before rewriting the data or sending the data to
+the user.</dd>
+  <dt>iterator scopes</dt>
+  <dd>the possible scopes for iterators are where the tablet server is already
+reading and/or writing data: minor compaction / flush time (<em>minc</em>
+scope), major compaction / file merging time (<em>majc</em> scope), and query
+time (<em>scan</em> scope).</dd>
+  <dt>gc</dt>
+  <dd>process that identifies temporary files in HDFS that are no longer needed by
+any process, and deletes them.</dd>
+  <dt>key</dt>
+  <dd>the key into the distributed, sorted map that Accumulo provides. The key is
+subdivided into row, column, and timestamp. The column is further divided into
+family, qualifier, and visibility.</dd>
+  <dt>locality group</dt>
+  <dd>a set of column families that will be grouped together on disk. With no
+locality groups configured, data is stored on disk in row order. If each
+column family were configured to be its own locality group, the data for each
+column would be stored separately, in row order. Configuring sets of columns
+into locality groups is a compromise between the two approaches and will
+improve performance when multiple columns are accessed in the same scan.</dd>
+  <dt>log-structured merge-tree</dt>
+  <dd>the sorting / flushing / merging scheme on which BigTable’s design is based.</dd>
+  <dt>logger</dt>
+  <dd>in 1.4 and older, process that accepts updates to tablet servers and writes
+them to local on-disk storage for redundancy. In 1.5, this functionality was
+subsumed by the tablet server and datanode with HDFS writes.</dd>
+  <dt>major compaction</dt>
+  <dd>merging multiple files into a single file. If all of a tablet’s files are
+merged into a single file, it is called a <em>full major compaction</em>.</dd>
+  <dt>master</dt>
+  <dd>process that detects and responds to tablet failures, balances load across
+tablet servers by assigning and migrating tablets when required, coordinates
+table operations, and handles tablet server logistics (startup, shutdown,
+recovery).</dd>
+  <dt>minor compaction</dt>
+  <dd>flushing data from memory to disk. Usually this creates a new file for a
+tablet, but if the memory flushed is merge-sorted in with data from an existing
+file (replacing that file), it is called a <em>merging minor compaction</em>.</dd>
+  <dt>monitor</dt>
+  <dd>process that displays status and usage information for all Accumulo
+components.</dd>
+  <dt>permissions</dt>
+  <dd>administrative abilities that must be given to a user such as creating tables
+or users and changing permissions or configuration parameters.</dd>
+  <dt>row</dt>
+  <dd>the portion of the key that controls atomicity. Keys with the same row are
+guaranteed to remain on a single tablet hosted by a single tablet server;
+therefore, multiple key/value pairs can be added to or removed from a row at the
+same time. The row is used for the primary sorting of the key.</dd>
+  <dt>scan</dt>
+  <dd>reading a range of key/value pairs.</dd>
+  <dt>tablet</dt>
+  <dd>a contiguous key range; the unit of work for a tablet server.</dd>
+  <dt>tablet servers</dt>
+  <dd>a set of servers that host reads and writes for tablets. Each server hosts
+a distinct set of tablets at any given time, but the tablets may be hosted by
+different servers over time.</dd>
+  <dt>timestamp</dt>
+  <dd>the portion of the key that controls versioning. Otherwise identical keys
+with differing timestamps are considered to be versions of a single
+<em>cell</em>. Accumulo can be configured to keep the <em>N</em> newest
+versions of each <em>cell</em>. When a deletion entry is inserted, it deletes
+all earlier versions for its cell.</dd>
+  <dt>value</dt>
+  <dd>immutable bytes associated with a particular key.</dd>
 </dl>
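
The glossary above describes the data model abstractly; the following is a
minimal sketch of how those pieces appear in the Java client API. The table
name "demo", the row and column values, and the authorization strings are
invented for illustration, and an existing Connector is assumed.

    import java.util.Map.Entry;
    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.security.Authorizations;
    import org.apache.accumulo.core.security.ColumnVisibility;

    public class GlossaryDemo {
      public static void writeAndScan(Connector conn) throws Exception {
        // Row, column family, column qualifier, visibility, and value; the
        // timestamp is assigned by the tablet server unless supplied explicitly.
        Mutation m = new Mutation("row1");
        m.put("family", "qualifier", new ColumnVisibility("(AB&C)|DEF"),
            new Value("some bytes".getBytes()));

        // All puts in a single Mutation share the row, so they apply atomically.
        BatchWriter writer = conn.createBatchWriter("demo", new BatchWriterConfig());
        writer.addMutation(m);
        writer.close();

        // A scan only returns entries whose visibility is satisfied by the
        // authorizations passed here (and granted to the scanning user).
        Scanner scanner = conn.createScanner("demo", new Authorizations("AB", "C"));
        for (Entry<Key, Value> entry : scanner) {
          System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
      }
    }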
 
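The iterator, iterator priority, and iterator scope entries correspond to the
IteratorSetting API. The sketch below attaches the built-in AgeOffFilter; the
priority, iterator name, and 30-second TTL are arbitrary example values, and
the same hypothetical "demo" table and Connector are assumed.

    import java.util.EnumSet;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.IteratorSetting;
    import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
    import org.apache.accumulo.core.iterators.user.AgeOffFilter;

    public class IteratorDemo {
      public static void attachAgeOff(Connector conn) throws Exception {
        // Priority 25 runs after lower-numbered iterators, such as the system
        // versioning iterator that new tables configure at priority 20.
        IteratorSetting setting = new IteratorSetting(25, "ageoff", AgeOffFilter.class);
        Map<String, String> options = new HashMap<String, String>();
        options.put("ttl", "30000"); // milliseconds
        setting.addOptions(options);

        // Apply at all three scopes: scan, minc (flush), and majc (file merging).
        conn.tableOperations().attachIterator("demo", setting,
            EnumSet.allOf(IteratorScope.class));
      }
    }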
 
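Locality groups are likewise configured per table. The group names and column
families below are invented for illustration; after changing the grouping, a
compaction rewrites existing files using the new layout.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.hadoop.io.Text;

    public class LocalityGroupDemo {
      public static void configureGroups(Connector conn) throws Exception {
        // Column families in the same group are stored together on disk, so a
        // scan that reads only the "meta" family avoids reading "content" data.
        Map<String, Set<Text>> groups = new HashMap<String, Set<Text>>();
        groups.put("metadata", new HashSet<Text>(Arrays.asList(new Text("meta"))));
        groups.put("docs", new HashSet<Text>(Arrays.asList(new Text("content"))));
        conn.tableOperations().setLocalityGroups("demo", groups);

        // Full major compaction over the whole table (null start/end rows),
        // flushing first and returning without waiting for completion.
        conn.tableOperations().compact("demo", null, null, true, false);
      }
    }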

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 9674634..86ebf93 100644
--- a/index.html
+++ b/index.html
@@ -238,22 +238,21 @@
         
         <p><br /></p>
 <div class="jumbotron" style="text-align: center">
-<p>
-The Apache Accumulo&trade; sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system.  
-</p>
+<p>The Apache Accumulo&trade; sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system.</p>
 <a class="btn btn-success btn-lg" href="downloads/" role="button"><span class="glyphicon glyphicon-download"></span> Download</a>
 </div>
 
-<p>Apache Accumulo is based on Google’s <a href="https://research.google.com/archive/bigtable.html" title="BigTable">BigTable</a> design and is built on
-top of <a href="https://hadoop.apache.org" title="Apache Hadoop">Apache Hadoop</a>, <a href="https://zookeeper.apache.org" title="Apache Zookeeper">Apache Zookeeper</a>, and <a href="https://thrift.apache.org" title="Apache Thrift">Apache Thrift</a>.  Apache Accumulo features a few novel 
-improvements on the BigTable design in the form of cell-based access control and a server-side
-programming mechanism that can modify key/value pairs at various points in the
-data management process.  Other notable improvements and feature are outlined
-<a href="notable_features">here</a>.</p>
+<p>Apache Accumulo is based on Google’s <a href="https://research.google.com/archive/bigtable.html" title="BigTable">BigTable</a> design and is built on top
+of <a href="https://hadoop.apache.org" title="Apache Hadoop">Apache Hadoop</a>, <a href="https://zookeeper.apache.org" title="Apache Zookeeper">Apache Zookeeper</a>, and <a href="https://thrift.apache.org" title="Apache Thrift">Apache Thrift</a>. Apache
+Accumulo features a few novel improvements on the BigTable design in the form
+of cell-based access control and a server-side programming mechanism that can
+modify key/value pairs at various points in the data management process. Other
+notable improvements and features are outlined <a href="notable_features">here</a>.</p>
 
-<p>Google published the design of BigTable in 2006.  Several other open source
-projects have implemented aspects of this design including <a href="https://hbase.apache.org" title="Apache HBase">Apache HBase</a>, <a href="http://hypertable.com" title="Hypertable">Hypertable</a>,
-and <a href="https://cassandra.apache.org" title="Apache Cassandra">Apache Cassandra</a>.  Accumulo began its development in 2008 and joined the <a href="https://www.apache.org" title="Apache Software Foundation">Apache community</a> in 2011.</p>
+<p>Google published the design of BigTable in 2006. Several other open source
+projects have implemented aspects of this design including <a href="https://hbase.apache.org" title="Apache HBase">Apache HBase</a>,
+<a href="http://hypertable.com" title="Hypertable">Hypertable</a>, and <a href="https://cassandra.apache.org" title="Apache Cassandra">Apache Cassandra</a>. Accumulo began its development in
+2008 and joined the <a href="https://www.apache.org" title="Apache Software Foundation">Apache community</a> in 2011.</p>
 
 
       </div>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/mailing_list.html
----------------------------------------------------------------------
diff --git a/mailing_list.html b/mailing_list.html
index 0c6b4e0..017e9ed 100644
--- a/mailing_list.html
+++ b/mailing_list.html
@@ -243,40 +243,45 @@ that search providers linked on this page are not part of the <a href="https://m
 mailing list archives</a>.</p>
 
 <table class="table">
-
-<tr>
-<th>user</th>
-<td>For general user questions, help, and announcements.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-user" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/user@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:user-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:user-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td><a href="mailto:user@accumulo.apache.org" class="btn btn-warning btn-xs"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
-</tr>
-
-<tr>
-<th>dev</th>
-<td>For anyone interested in contributing or following development activities.
-It is recommended that you also subscribe to <b>notifications</b>.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-dev" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/dev@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:dev-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:dev-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td><a href="mailto:dev@accumulo.apache.org" class="btn btn-warning btn-xs"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
-</tr>
-
-<tr>
-<th>commits</th>
-<td>For following commits.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-commits" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/commits@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:commits-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:commits-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td></td>
-</tr>
-
-<tr>
-<th>notifications</th>
-<td>For following JIRA notifications.</td>
-<td><a href="https://mail-archives.apache.org/mod_mbox/accumulo-notifications" class="btn btn-primary btn-xs"><span class="glyphicon glyphicon-book"></span> Archive</a> <a href="https://www.mail-archive.com/notifications@accumulo.apache.org/" class="btn btn-info btn-xs"><span class="glyphicon glyphicon-search"></span> Search</a></td>
-<td><a href="mailto:notifications-subscribe@accumulo.apache.org" class="btn btn-success btn-xs"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a href="mailto:notifications-unsubscribe@accumulo.apache.org" class="btn btn-danger btn-xs"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
-<td></td>
-</tr>
-
+  <thead>
+    <tr>
+      <th>Name</th>
+      <th>Description</th>
+      <th>Read</th>
+      <th>Follow</th>
+      <th>Post</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td><strong>user</strong></td>
+      <td>General user questions, help, and announcements</td>
+      <td><a class="btn btn-primary btn-xs" href="https://mail-archives.apache.org/mod_mbox/accumulo-user"><span class="glyphicon glyphicon-book"></span> Archive</a> <a class="btn btn-info btn-xs" href="https://www.mail-archive.com/user@accumulo.apache.org"><span class="glyphicon glyphicon-search"></span> Search</a></td>
+      <td><a class="btn btn-success btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a class="btn btn-danger btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
+      <td><a class="btn btn-warning btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
+    </tr>
+    <tr>
+      <td><strong>dev</strong></td>
+      <td>Contributor discussions and development activity</td>
+      <td><a class="btn btn-primary btn-xs" href="https://mail-archives.apache.org/mod_mbox/accumulo-dev"><span class="glyphicon glyphicon-book"></span> Archive</a> <a class="btn btn-info btn-xs" href="https://www.mail-archive.com/dev@accumulo.apache.org"><span class="glyphicon glyphicon-search"></span> Search</a></td>
+      <td><a class="btn btn-success btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a class="btn btn-danger btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
+      <td><a class="btn btn-warning btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-envelope"></span> Post</a></td>
+    </tr>
+    <tr>
+      <td><strong>commits</strong></td>
+      <td>Code changes</td>
+      <td><a class="btn btn-primary btn-xs" href="https://mail-archives.apache.org/mod_mbox/accumulo-commits"><span class="glyphicon glyphicon-book"></span> Archive</a> <a class="btn btn-info btn-xs" href="https://www.mail-archive.com/commits@accumulo.apache.org"><span class="glyphicon glyphicon-search"></span> Search</a></td>
+      <td><a class="btn btn-success btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a class="btn btn-danger btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
+      <td> </td>
+    </tr>
+    <tr>
+      <td><strong>notifications</strong></td>
+      <td>Automated notifications (JIRA, etc.)</td>
+      <td><a class="btn btn-primary btn-xs" href="https://mail-archives.apache.org/mod_mbox/accumulo-notifications"><span class="glyphicon glyphicon-book"></span> Archive</a> <a class="btn btn-info btn-xs" href="https://www.mail-archive.com/notifications@accumulo.apache.org"><span class="glyphicon glyphicon-search"></span> Search</a></td>
+      <td><a class="btn btn-success btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#110;&#111;&#116;&#105;&#102;&#105;&#099;&#097;&#116;&#105;&#111;&#110;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-plus"></span> Subscribe</a> <a class="btn btn-danger btn-xs" href="&#109;&#097;&#105;&#108;&#116;&#111;:&#110;&#111;&#116;&#105;&#102;&#105;&#099;&#097;&#116;&#105;&#111;&#110;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#097;&#099;&#099;&#117;&#109;&#117;&#108;&#111;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;"><span class="glyphicon glyphicon-remove"></span> Unsubscribe</a></td>
+      <td> </td>
+    </tr>
+  </tbody>
 </table>
 
 <h2 id="mailing-list-search-providers">Mailing List Search Providers</h2>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/notable_features.html
----------------------------------------------------------------------
diff --git a/notable_features.html b/notable_features.html
index 9a90644..563a248 100644
--- a/notable_features.html
+++ b/notable_features.html
@@ -238,23 +238,20 @@
         
         <h1 class="title">Notable Features</h1>
         
-        <h2 id="categories">Categories</h2>
-
-<ul>
-  <li><a href="#design">Table Design and Configuration</a></li>
-  <li><a href="#integrity">Integrity/Availability</a></li>
-  <li><a href="#performance">Performance</a></li>
-  <li><a href="#testing">Testing</a></li>
-  <li><a href="#client">Client API</a></li>
-  <li><a href="#behaviors">Extensible Behaviors</a></li>
-  <li><a href="#admin">General Administration</a></li>
-  <li><a href="#internal_dm">Internal Data Management</a></li>
-  <li><a href="#ondemand_dm">On-demand Data Management</a></li>
+        
+<ul id="markdown-toc">
+  <li><a href="#table-design-and-configuration" id="markdown-toc-table-design-and-configuration">Table Design and Configuration</a></li>
+  <li><a href="#integrityavailability" id="markdown-toc-integrityavailability">Integrity/Availability</a></li>
+  <li><a href="#performance" id="markdown-toc-performance">Performance</a></li>
+  <li><a href="#testing" id="markdown-toc-testing">Testing</a></li>
+  <li><a href="#client-api" id="markdown-toc-client-api">Client API</a></li>
+  <li><a href="#extensible-behaviors" id="markdown-toc-extensible-behaviors">Extensible Behaviors</a></li>
+  <li><a href="#general-administration" id="markdown-toc-general-administration">General Administration</a></li>
+  <li><a href="#internal-data-management" id="markdown-toc-internal-data-management">Internal Data Management</a></li>
+  <li><a href="#on-demand-data-management" id="markdown-toc-on-demand-data-management">On-demand Data Management</a></li>
 </ul>
 
-<hr />
-
-<h2 id="table-design-and-configuration-a-iddesigna">Table Design and Configuration <a id="design"></a></h2>
+<h2 id="table-design-and-configuration">Table Design and Configuration</h2>
 
 <h3 id="iterators">Iterators</h3>
 
@@ -303,7 +300,7 @@ over multiple disjoint HDFS instances.  This allows Accumulo to scale beyond the
 of a single namenode.  When used in conjunction with HDFS federation, multiple namenodes
 can share a pool of datanodes.</p>
 
-<h2 id="integrityavailability-a-idintegritya">Integrity/Availability <a id="integrity"></a></h2>
+<h2 id="integrityavailability">Integrity/Availability</h2>
 
 <h3 id="master-fail-over">Master fail over</h3>
 
@@ -361,7 +358,7 @@ Zookeeper to synchronize operations across process faults.</p>
 
 <p>Scans will not see data inserted into a row after the scan of that row begins.</p>
 
-<h2 id="performance-a-idperformancea">Performance <a id="performance"></a></h2>
+<h2 id="performance">Performance</h2>
 
 <h3 id="relative-encoding">Relative encoding</h3>
 
@@ -413,7 +410,7 @@ is generated.  As a block is read more, larger indexes are generated making
 future seeks faster. This strategy allows Accumulo to dynamically respond to
 read patterns without precomputing block indexes when RFiles are written.</p>
 
-<h2 id="testing-a-idtestinga">Testing <a id="testing"></a></h2>
+<h2 id="testing">Testing</h2>
 
 <h3 id="mock">Mock</h3>
 
@@ -468,7 +465,7 @@ Other tests have no concept of data correctness and have the simple goal of
 crashing Accumulo. Many obscure bugs have been uncovered by this testing
 framework and subsequently corrected.</p>
 
-<h2 id="client-api-a-idclienta">Client API <a id="client"></a></h2>
+<h2 id="client-api">Client API</h2>
 
 <h3 id="batch-scanner4"><a href="/1.5/accumulo_user_manual#_writing_accumulo_clients">Batch Scanner</a></h3>
 
@@ -509,7 +506,7 @@ available to other languages like Python, Ruby, C++, etc.</p>
 which allow users to perform efficient, atomic read-modify-write operations on rows. Conditions can
 be defined using equality checks of the values in a column or the absence of a column. For more
 information on using this feature, users can reference the Javadoc for <a href="/1.6/apidocs/org/apache/accumulo/core/data/ConditionalMutation">ConditionalMutation</a> and
-<a href="/1.6/apidocs/org/apache/accumulo/core/client/ConditionalWriter">ConditionalWriter</a></p>
+<a href="/1.6/apidocs/org/apache/accumulo/core/client/ConditionalWriter">ConditionalWriter</a>.</p>
 
 <h3 id="lexicoders">Lexicoders</h3>
 
@@ -520,7 +517,7 @@ Lexicoders which have numerous implementations that support for efficient transl
 Java primitives to byte arrays and vice versa. These classes can greatly reduce the burden of
 writing custom encodings and help avoid common programming mistakes.</p>
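
As a minimal sketch (assuming the Accumulo 1.6+ client libraries), a
LongLexicoder encodes signed longs so that the byte encodings sort in numeric
order:

    import org.apache.accumulo.core.client.lexicoder.LongLexicoder;

    public class LexicoderDemo {
      public static void main(String[] args) {
        LongLexicoder lexicoder = new LongLexicoder();
        // The encoded form of -5 sorts lexicographically before the encoded
        // form of 3, so keys built from these bytes keep numeric order.
        byte[] a = lexicoder.encode(-5L);
        byte[] b = lexicoder.encode(3L);
        System.out.println(lexicoder.decode(a)); // -5
        System.out.println(lexicoder.decode(b)); // 3
      }
    }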
 
-<h2 id="extensible-behaviors-a-idbehaviorsa">Extensible Behaviors <a id="behaviors"></a></h2>
+<h2 id="extensible-behaviors">Extensible Behaviors</h2>
 
 <h3 id="pluggable-balancer">Pluggable balancer</h3>
 
@@ -555,7 +552,7 @@ it is very unlikely that more data will be written to it, and thus paying the pe
 to re-write a large file can be avoided. Implementations of this compaction strategy
 can be used to optimize the data that compactions will write.</p>
 
-<h2 id="general-administration-a-idadmina">General Administration <a id="admin"></a></h2>
+<h2 id="general-administration">General Administration</h2>
 
 <h3 id="monitor-page">Monitor page</h3>
 
@@ -586,7 +583,7 @@ effect until server processes are restarted.</p>
 <p>Tables can be renamed easily because Accumulo uses internal table IDs and
 stores mappings between names and IDs in Zookeeper.</p>
 
-<h2 id="internal-data-management-a-idinternaldma">Internal Data Management <a id="internal_dm"></a></h2>
+<h2 id="internal-data-management">Internal Data Management</h2>
 
 <h3 id="locality-groups">Locality groups</h3>
 
@@ -632,7 +629,7 @@ level of security that Accumulo provides. It is still a work in progress because
 the intermediate files created by Accumulo when recovering from a TabletServer
 failure are not encrypted.</p>
 
-<h2 id="on-demand-data-management-a-idondemanddma">On-demand Data Management <a id="ondemand_dm"></a></h2>
+<h2 id="on-demand-data-management">On-demand Data Management</h2>
 
 <h3 id="compactions">Compactions</h3>
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/e938fe2b/old_documentation.html
----------------------------------------------------------------------
diff --git a/old_documentation.html b/old_documentation.html
index 1356706..a1d0034 100644
--- a/old_documentation.html
+++ b/old_documentation.html
@@ -243,26 +243,26 @@
 <h4 id="documentation">1.5 Documentation</h4>
 
 <ul>
-  <li><a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=1.5.4" id="/1.5/README">README</a></li>
-  <li><a href="/1.5/accumulo_user_manual.pdf" id="/1.5/accumulo_user_manual.pdf">PDF manual</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.5/README']);" href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=1.5.4">README</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.5/accumulo_user_manual.pdf']);" href="/1.5/accumulo_user_manual.pdf">PDF manual</a></li>
   <li><a href="/1.5/accumulo_user_manual" title="1.5 user manual">html manual</a></li>
   <li><a href="/1.5/examples" title="1.5 examples">examples</a></li>
-  <li><a href="/1.5/apidocs" id="/1.5/apidocs">Javadoc</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.5/apidocs']);" href="/1.5/apidocs">Javadoc</a></li>
 </ul>
 
 <h4 id="documentation-1">1.4 Documentation</h4>
 
 <ul>
-  <li><a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=f7d87b6e407de6597b6c0ca60ca1b6a321faf237" onclick="javascript: _gaq.push(['_trackPageview', '/1.4/README']);">README</a></li>
-  <li><a href="/1.4/accumulo_user_manual.pdf" onclick="javascript: _gaq.push(['_trackPageview', '/1.4/accumulo_user_manual.pdf']);">pdf manual</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.4/README']);" href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;hb=f7d87b6e407de6597b6c0ca60ca1b6a321faf237">README</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.4/accumulo_user_manual.pdf']);" href="/1.4/accumulo_user_manual.pdf">PDF manual</a></li>
   <li><a href="/1.4/user_manual" title="1.4 user manual">html manual</a></li>
   <li><a href="/1.4/examples" title="1.4 examples">examples</a></li>
-  <li><a href="/1.4/apidocs" onclick="javascript: _gaq.push(['_trackPageview', '/1.4/apidocs']);">Javadoc</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.4/apidocs']);" href="/1.4/apidocs">Javadoc</a></li>
 </ul>
 
 <h4 id="documentation-2">1.3 Documentation</h4>
 <ul>
-  <li><a href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;h=86713d9b6add9038d5130b4a23ba4a79b72d0f15;hb=3b4ffc158945c1f834fc6f257f21484c61691d0f" onclick="javascript: _gaq.push(['_trackPageview', '/1.3/README']);">README</a></li>
+  <li><a onClick="javascript: _gaq.push(['_trackPageview', '/1.3/README']);" href="https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=blob_plain;f=README;h=86713d9b6add9038d5130b4a23ba4a79b72d0f15;hb=3b4ffc158945c1f834fc6f257f21484c61691d0f">README</a></li>
   <li><a href="/user_manual_1.3-incubating/" title="1.3 user manual">html manual</a></li>
   <li><a href="/user_manual_1.3-incubating/examples/" title="1.3 examples">examples</a></li>
 </ul>