Posted to commits@hbase.apache.org by st...@apache.org on 2014/08/01 18:40:17 UTC

[3/3] git commit: HBASE-11640 Add syntax highlighting support to HBase Ref Guide programlistings (Misty Stanley-Jones)

HBASE-11640 Add syntax highlighting support to HBase Ref Guide programlistings (Misty Stanley-Jones)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/24b5fa7f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/24b5fa7f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/24b5fa7f

Branch: refs/heads/master
Commit: 24b5fa7f0c442b54157d961bf47d912455275151
Parents: 78d47cc
Author: stack <st...@apache.org>
Authored: Fri Aug 1 09:39:56 2014 -0700
Committer: stack <st...@apache.org>
Committed: Fri Aug 1 09:40:03 2014 -0700

----------------------------------------------------------------------
 .../appendix_contributing_to_documentation.xml  |  41 ++++--
 src/main/docbkx/book.xml                        | 146 +++++++++----------
 src/main/docbkx/case_studies.xml                |   4 +-
 src/main/docbkx/configuration.xml               |  44 +++---
 src/main/docbkx/cp.xml                          |   2 +-
 src/main/docbkx/customization.xsl               |   1 +
 src/main/docbkx/developer.xml                   | 130 ++++++++---------
 src/main/docbkx/getting_started.xml             |  36 ++---
 src/main/docbkx/hbase_apis.xml                  |   4 +-
 src/main/docbkx/ops_mgt.xml                     |  82 +++++------
 src/main/docbkx/performance.xml                 |  16 +-
 src/main/docbkx/preface.xml                     |   2 +-
 src/main/docbkx/schema_design.xml               |  12 +-
 src/main/docbkx/security.xml                    |  78 +++++-----
 src/main/docbkx/thrift_filter_language.xml      |  86 +++++------
 src/main/docbkx/tracing.xml                     |  12 +-
 src/main/docbkx/troubleshooting.xml             |  26 ++--
 src/main/docbkx/upgrading.xml                   |   8 +-
 src/main/docbkx/zookeeper.xml                   |  28 ++--
 19 files changed, 391 insertions(+), 367 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/appendix_contributing_to_documentation.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/appendix_contributing_to_documentation.xml b/src/main/docbkx/appendix_contributing_to_documentation.xml
index 080525a..2f19c7b 100644
--- a/src/main/docbkx/appendix_contributing_to_documentation.xml
+++ b/src/main/docbkx/appendix_contributing_to_documentation.xml
@@ -107,7 +107,7 @@
                 <para>For each issue you work on, create a new branch. One convention that works
                     well for naming the branches is to name a given branch the same as the JIRA it
                     relates to:</para>
-                <screen>$ git checkout -b HBASE-123456</screen>
+                <screen language="bourne">$ git checkout -b HBASE-123456</screen>
             </step>
             <step>
                 <para>Make your suggested changes on your branch, committing your changes to your
@@ -123,8 +123,8 @@
                         sure you have built HBase at least once, in order to fetch all the Maven
                         dependencies you need.</para>
                 </note>
-                <screen>$ mvn clean install -DskipTests               # Builds HBase</screen>
-                <screen>$ mvn clean site -DskipTests                  # Builds the website and documentation</screen>
+                <screen language="bourne">$ mvn clean install -DskipTests               # Builds HBase</screen>
+                <screen language="bourne">$ mvn clean site -DskipTests                  # Builds the website and documentation</screen>
                 <para>If any errors occur, address them.</para>
             </step>
             <step>
@@ -132,7 +132,7 @@
                     the area of the code you are working in has had a lot of changes lately, make
                     sure you rebase your branch against the remote master and take care of any
                     conflicts before submitting your patch.</para>
-                <screen>
+                <screen language="bourne">
 $ git checkout HBASE-123456
 $ git rebase origin/master                
                 </screen>
@@ -141,7 +141,7 @@ $ git rebase origin/master
                 <para>Generate your patch against the remote master. Run the following command from
                     the top level of your git repository (usually called
                     <literal>hbase</literal>):</para>
-                <screen>$ git diff --no-prefix origin/master > HBASE-123456.patch</screen>
+                <screen language="bourne">$ git diff --no-prefix origin/master > HBASE-123456.patch</screen>
                 <para>The name of the patch should contain the JIRA ID. Look over the patch file to
                     be sure that you did not change any additional files by accident and that there
                     are no other surprises. When you are satisfied, attach the patch to the JIRA and
@@ -227,7 +227,7 @@ $ git rebase origin/master
             recommended that you use a &lt;figure&gt; Docbook element for an image. This allows
             screen readers to navigate to the image and also provides alternative text for the
             image. The following is an example of a &lt;figure&gt; element.</para>
-        <programlisting><![CDATA[<figure>
+        <programlisting language="xml"><![CDATA[<figure>
   <title>HFile Version 1</title>
   <mediaobject>
     <imageobject>
@@ -295,7 +295,7 @@ $ git rebase origin/master
                         render as block-level elements (they take the whole width of the page), it
                         is better to mark them up as siblings to the paragraphs around them, like
                         this:</para>
-                    <programlisting><![CDATA[<para>This is the paragraph.</para>
+                    <programlisting language="xml"><![CDATA[<para>This is the paragraph.</para>
 <note>
     <para>This is an admonition which occurs after the paragraph.</para>
 </note>]]></programlisting>
@@ -312,7 +312,7 @@ $ git rebase origin/master
                         consist of things other than plain text, they need to be wrapped in some
                        element. If they are plain text, they need to be enclosed in &lt;para&gt;
                         tags. This is tedious but necessary for validity.</para>
-                    <programlisting><![CDATA[<itemizedlist>
+                    <programlisting language="xml"><![CDATA[<itemizedlist>
     <listitem>
         <para>This is a paragraph.</para>
     </listitem>
@@ -367,7 +367,7 @@ $ git rebase origin/master
                                 the content. Also, to avoid having an extra blank line at the
                                 beginning of the programlisting output, do not put the CDATA
                                 element on its own line. For example:</para>
-                            <programlisting><![CDATA[        <programlisting>
+                            <programlisting language="bourne"><![CDATA[        <programlisting>
 case $1 in
   --cleanZk|--cleanHdfs|--cleanAll)
     matches="yes" ;;
@@ -396,6 +396,29 @@ esac
                         especially if you use GUI mode in the editor.</para>
                 </answer>
             </qandaentry>
+            <qandaentry>
+                <question>
+                    <para>Syntax Highlighting</para>
+                </question>
+                <answer>
+                    <para>The HBase Reference Guide uses the <link
+                            xlink:href="http://sourceforge.net/projects/xslthl/files/xslthl/2.1.0/"
+                            >XSLT Syntax Highlighting</link> Maven module for syntax highlighting.
+                        To enable syntax highlighting for a given &lt;programlisting&gt; or
+                        &lt;screen&gt; (or possibly other elements), add the attribute
+                                <literal>language=<replaceable>LANGUAGE_OF_CHOICE</replaceable></literal>
+                        to the element, as in the following example:</para>
+                    <programlisting language="xml"><![CDATA[
+<programlisting language="xml">
+    <foo>bar</foo>
+    <bar>foo</bar>
+</programlisting>]]></programlisting>
+                    <para>Several syntax types are supported. The most interesting ones for the
+                        HBase Reference Guide are <literal>java</literal>, <literal>xml</literal>,
+                            <literal>sql</literal>, and <literal>bourne</literal> (for BASH shell
+                        output or Linux command-line examples).</para>
+                </answer>
+            </qandaentry>
         </qandaset>
     </section>
 </appendix>
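
Note that the language attributes added in this patch only produce highlighted output if the
xslthl module is enabled on the build side; the pom.xml half of HBASE-11640 is not part of this
file. A minimal sketch of the docbkx-maven-plugin wiring it relies on, assuming the plugin's
highlightSource parameter and the net.sf.xslthl:xslthl:2.1.0 coordinates (verify against the
actual pom.xml):

    <plugin>
      <groupId>com.agilejava.docbkx</groupId>
      <artifactId>docbkx-maven-plugin</artifactId>
      <configuration>
        <!-- enable xslthl-based highlighting of <programlisting> and <screen> -->
        <highlightSource>1</highlightSource>
      </configuration>
      <dependencies>
        <dependency>
          <!-- the XSLT Syntax Highlighting module named in the Q&A entry above -->
          <groupId>net.sf.xslthl</groupId>
          <artifactId>xslthl</artifactId>
          <version>2.1.0</version>
        </dependency>
      </dependencies>
    </plugin>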

http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/book.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/book.xml b/src/main/docbkx/book.xml
index 0564354..36f2257 100644
--- a/src/main/docbkx/book.xml
+++ b/src/main/docbkx/book.xml
@@ -300,25 +300,25 @@
         <para> A namespace can be created, removed or altered. Namespace membership is determined
           during table creation by specifying a fully-qualified table name of the form:</para>
 
-        <programlisting><![CDATA[<table namespace>:<table qualifier>]]></programlisting>
+        <programlisting language="xml"><![CDATA[<table namespace>:<table qualifier>]]></programlisting>
 
 
         <example>
           <title>Examples</title>
 
-          <programlisting>
+          <programlisting language="bourne">
 #Create a namespace
 create_namespace 'my_ns'
             </programlisting>
-          <programlisting>
+          <programlisting language="bourne">
 #create my_table in my_ns namespace
 create 'my_ns:my_table', 'fam'
           </programlisting>
-          <programlisting>
+          <programlisting language="bourne">
 #drop namespace
 drop_namespace 'my_ns'
           </programlisting>
-          <programlisting>
+          <programlisting language="bourne">
 #alter namespace
 alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
         </programlisting>
@@ -340,7 +340,7 @@ alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
         <example>
           <title>Examples</title>
 
-          <programlisting>
+          <programlisting language="bourne">
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 
@@ -429,7 +429,7 @@ create 'bar', 'fam'
           populated with rows with keys "row1", "row2", "row3", and then another set of rows with
           the keys "abc1", "abc2", and "abc3". The following example shows how startRow and stopRow
           can be applied to a Scan instance to return the rows beginning with "row".</para>
-        <programlisting>
+        <programlisting language="java">
 public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
@@ -562,7 +562,7 @@ try {
           xml:id="default_get_example">
           <title>Default Get Example</title>
           <para>The following Get will only retrieve the current version of the row</para>
-          <programlisting>
+          <programlisting language="java">
 public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
@@ -575,7 +575,7 @@ byte[] b = r.getValue(CF, ATTR);  // returns current version of value
           xml:id="versioned_get_example">
           <title>Versioned Get Example</title>
           <para>The following Get will return the last 3 versions of the row.</para>
-          <programlisting>
+          <programlisting language="java">
 public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
@@ -603,7 +603,7 @@ List&lt;KeyValue&gt; kv = r.getColumn(CF, ATTR);  // returns all versions of thi
             <title>Implicit Version Example</title>
             <para>The following Put will be implicitly versioned by HBase with the current
               time.</para>
-            <programlisting>
+            <programlisting language="java">
 public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
@@ -616,7 +616,7 @@ htable.put(put);
             xml:id="explicit_version_example">
             <title>Explicit Version Example</title>
             <para>The following Put has the version timestamp explicitly set.</para>
-            <programlisting>
+            <programlisting language="java">
 public static final byte[] CF = "cf".getBytes();
 public static final byte[] ATTR = "attr".getBytes();
 ...
@@ -815,7 +815,7 @@ htable.put(put);
         Be sure to use the correct version of the HBase JAR for your system. The backticks
          (<literal>`</literal> symbols) cause the shell to execute the sub-commands, setting the
         CLASSPATH as part of the command. This example assumes you use a BASH-compatible shell. </para>
-      <screen>$ <userinput>HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0.jar rowcounter usertable</userinput></screen>
+      <screen language="bourne">$ <userinput>HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0.jar rowcounter usertable</userinput></screen>
      <para>When the command runs, internally, the HBase JAR finds the dependencies it needs, such as
        zookeeper and guava, on the passed <envar>HADOOP_CLASSPATH</envar>
         and adds the JARs to the MapReduce job configuration. See the source at
@@ -826,7 +826,7 @@ htable.put(put);
         <screen>java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper</screen>
         <para>If this occurs, try modifying the command as follows, so that it uses the HBase JARs
           from the <filename>target/</filename> directory within the build environment.</para>
-        <screen>$ <userinput>HADOOP_CLASSPATH=${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar rowcounter usertable</userinput></screen>
+        <screen language="bourne">$ <userinput>HADOOP_CLASSPATH=${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar rowcounter usertable</userinput></screen>
       </note>
       <caution>
        <title>Notice to MapReduce users of HBase 0.96.1 and above</title>
@@ -876,14 +876,14 @@ Exception in thread "main" java.lang.IllegalAccessError: class
             <code>HADOOP_CLASSPATH</code> environment variable at job submission time. When
           launching jobs that package their dependencies, all three of the following job launching
           commands satisfy this requirement:</para>
-        <screen>
+        <screen language="bourne">
 $ <userinput>HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass</userinput>
 $ <userinput>HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass</userinput>
 $ <userinput>HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass</userinput>
         </screen>
         <para>For jars that do not package their dependencies, the following command structure is
           necessary:</para>
-        <screen>
+        <screen language="bourne">
 $ <userinput>HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',')</userinput> ...
         </screen>
         <para>See also <link
@@ -898,7 +898,7 @@ $ <userinput>HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp
      <para>The HBase JAR also serves as a Driver for some bundled MapReduce jobs. To learn about
         the bundled MapReduce jobs, run the following command.</para>
 
-      <screen>$ <userinput>${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar</userinput>
+      <screen language="bourne">$ <userinput>${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar</userinput>
 <computeroutput>An example program must be given as the first argument.
 Valid program names are:
   copytable: Export a table from local cluster to peer cluster
@@ -910,7 +910,7 @@ Valid program names are:
     </screen>
      <para>Each of the valid program names is a bundled MapReduce job. To run one of the jobs,
         model your command after the following example.</para>
-      <screen>$ <userinput>${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar rowcounter myTable</userinput></screen>
+      <screen language="bourne">$ <userinput>${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar rowcounter myTable</userinput></screen>
     </section>
 
     <section>
@@ -972,7 +972,7 @@ Valid program names are:
         xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html">RowCounter</link>
         MapReduce job uses <code>TableInputFormat</code> and does a count of all rows in the specified
         table. To run it, use the following command: </para>
-      <screen>$ <userinput>./bin/hadoop jar hbase-X.X.X.jar</userinput></screen> 
+      <screen language="bourne">$ <userinput>./bin/hadoop jar hbase-X.X.X.jar</userinput></screen> 
       <para>This will
         invoke the HBase MapReduce Driver class. Select <literal>rowcounter</literal> from the choice of jobs
        offered. This will print rowcounter usage advice to standard output. Specify the tablename,
@@ -1011,7 +1011,7 @@ Valid program names are:
        <para>The following is an example of using HBase as a MapReduce source in a read-only manner.
          Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from
          the Mapper. The job would be defined as follows...</para>
-        <programlisting>
+        <programlisting language="java">
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config, "ExampleRead");
 job.setJarByClass(MyReadJob.class);     // class that contains mapper
@@ -1038,7 +1038,7 @@ if (!b) {
   </programlisting>
         <para>...and the mapper instance would extend <link
             xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html">TableMapper</link>...</para>
-        <programlisting>
+        <programlisting language="java">
 public static class MyMapper extends TableMapper&lt;Text, Text&gt; {
 
   public void map(ImmutableBytesWritable row, Result value, Context context) throws InterruptedException, IOException {
@@ -1052,7 +1052,7 @@ public static class MyMapper extends TableMapper&lt;Text, Text&gt; {
         <title>HBase MapReduce Read/Write Example</title>
         <para>The following is an example of using HBase both as a source and as a sink with
           MapReduce. This example will simply copy data from one table to another.</para>
-        <programlisting>
+        <programlisting language="java">
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config,"ExampleReadWrite");
 job.setJarByClass(MyReadWriteJob.class);    // class that contains mapper
@@ -1091,7 +1091,7 @@ if (!b) {
         <para>The following is the example mapper, which will create a <classname>Put</classname>
          matching the input <classname>Result</classname> and emit it. Note: this is what the
           CopyTable utility does. </para>
-        <programlisting>
+        <programlisting language="java">
 public static class MyMapper extends TableMapper&lt;ImmutableBytesWritable, Put&gt;  {
 
 	public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
@@ -1125,7 +1125,7 @@ public static class MyMapper extends TableMapper&lt;ImmutableBytesWritable, Put&
         <para>The following example uses HBase as a MapReduce source and sink with a summarization
           step. This example will count the number of distinct instances of a value in a table and
           write those summarized counts in another table.
-          <programlisting>
+          <programlisting language="java">
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config,"ExampleSummary");
 job.setJarByClass(MySummaryJob.class);     // class that contains mapper and reducer
@@ -1156,7 +1156,7 @@ if (!b) {
           In this example mapper a column with a String-value is chosen as the value to summarize
           upon. This value is used as the key to emit from the mapper, and an
             <classname>IntWritable</classname> represents an instance counter.
-          <programlisting>
+          <programlisting language="java">
 public static class MyMapper extends TableMapper&lt;Text, IntWritable&gt;  {
 	public static final byte[] CF = "cf".getBytes();
 	public static final byte[] ATTR1 = "attr1".getBytes();
@@ -1174,7 +1174,7 @@ public static class MyMapper extends TableMapper&lt;Text, IntWritable&gt;  {
     </programlisting>
           In the reducer, the "ones" are counted (just like any other MR example that does this),
          and the reducer then emits a <classname>Put</classname>.
-          <programlisting>
+          <programlisting language="java">
 public static class MyTableReducer extends TableReducer&lt;Text, IntWritable, ImmutableBytesWritable&gt;  {
 	public static final byte[] CF = "cf".getBytes();
 	public static final byte[] COUNT = "count".getBytes();
@@ -1199,7 +1199,7 @@ public static class MyTableReducer extends TableReducer&lt;Text, IntWritable, Im
        <para>This is very similar to the summary example above, except that it uses
           HBase as a MapReduce source but HDFS as the sink. The differences are in the job setup and
           in the reducer. The mapper remains the same. </para>
-        <programlisting>
+        <programlisting language="java">
 Configuration config = HBaseConfiguration.create();
 Job job = new Job(config,"ExampleSummaryToFile");
 job.setJarByClass(MySummaryFileJob.class);     // class that contains mapper and reducer
@@ -1228,7 +1228,7 @@ if (!b) {
         <para>As stated above, the previous Mapper can run unchanged with this example. As for the
          Reducer, it is a "generic" Reducer instead of extending TableReducer and emitting
           Puts.</para>
-        <programlisting>
+        <programlisting language="java">
  public static class MyReducer extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt;  {
 
 	public void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context) throws IOException, InterruptedException {
@@ -1268,7 +1268,7 @@ if (!b) {
          reducers. Neither is right nor wrong; it depends on your use case. Recognize that the more
           reducers that are assigned to the job, the more simultaneous connections to the RDBMS will
           be created - this will scale, but only to a point. </para>
-        <programlisting>
+        <programlisting language="java">
  public static class MyRdbmsReducer extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt;  {
 
 	private Connection c = null;
@@ -1299,7 +1299,7 @@ if (!b) {
       <para>Although the framework currently allows one HBase table as input to a MapReduce job,
        other HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating
         an HTable instance in the setup method of the Mapper.
-        <programlisting>public class MyMapper extends TableMapper&lt;Text, LongWritable&gt; {
+        <programlisting language="java">public class MyMapper extends TableMapper&lt;Text, LongWritable&gt; {
   private HTable myOtherTable;
 
   public void setup(Context context) {
@@ -1519,11 +1519,11 @@ if (!b) {
             xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration">HBaseConfiguration</link>
          instance. This will ensure sharing of ZooKeeper and socket instances with the RegionServers,
          which is usually what you want. For example, this is preferred:</para>
-          <programlisting>HBaseConfiguration conf = HBaseConfiguration.create();
+          <programlisting language="java">HBaseConfiguration conf = HBaseConfiguration.create();
 HTable table1 = new HTable(conf, "myTable");
 HTable table2 = new HTable(conf, "myTable");</programlisting>
           <para>as opposed to this:</para>
-          <programlisting>HBaseConfiguration conf1 = HBaseConfiguration.create();
+          <programlisting language="java">HBaseConfiguration conf1 = HBaseConfiguration.create();
 HTable table1 = new HTable(conf1, "myTable");
 HBaseConfiguration conf2 = HBaseConfiguration.create();
 HTable table2 = new HTable(conf2, "myTable");</programlisting>
@@ -1537,7 +1537,7 @@ HTable table2 = new HTable(conf2, "myTable");</programlisting>
               the following example:</para>
             <example>
               <title>Pre-Creating a <code>HConnection</code></title>
-              <programlisting>// Create a connection to the cluster.
+              <programlisting language="java">// Create a connection to the cluster.
 HConnection connection = HConnectionManager.createConnection(Configuration);
 HTableInterface table = connection.getTable("myTable");
 // use table as needed, the table returned is lightweight
@@ -1594,7 +1594,7 @@ connection.close();</programlisting>
           represents a list of Filters with a relationship of <code>FilterList.Operator.MUST_PASS_ALL</code> or
           <code>FilterList.Operator.MUST_PASS_ONE</code> between the Filters.  The following example shows an 'or' between two
           Filters (checking for either 'my value' or 'my other value' on the same attribute).</para>
-<programlisting>
+<programlisting language="java">
 FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE);
 SingleColumnValueFilter filter1 = new SingleColumnValueFilter(
 	cf,
@@ -1627,7 +1627,7 @@ scan.setFilter(list);
             </code>), inequality (<code>CompareOp.NOT_EQUAL</code>), or ranges (e.g.,
              <code>CompareOp.GREATER</code>). The following is an example of testing a
            column for equivalence to the String value "my value"...</para>
-          <programlisting>
+          <programlisting language="java">
 SingleColumnValueFilter filter = new SingleColumnValueFilter(
 	cf,
 	column,
@@ -1650,7 +1650,7 @@ scan.setFilter(filter);
           <para><link
               xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RegexStringComparator.html">RegexStringComparator</link>
             supports regular expressions for value comparisons.</para>
-          <programlisting>
+          <programlisting language="java">
 RegexStringComparator comp = new RegexStringComparator("my.");   // any value that starts with 'my'
 SingleColumnValueFilter filter = new SingleColumnValueFilter(
 	cf,
@@ -1671,7 +1671,7 @@ scan.setFilter(filter);
               xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SubstringComparator.html">SubstringComparator</link>
             can be used to determine if a given substring exists in a value. The comparison is
             case-insensitive. </para>
-          <programlisting>
+          <programlisting language="java">
 SubstringComparator comp = new SubstringComparator("y val");   // looking for 'my value'
 SingleColumnValueFilter filter = new SingleColumnValueFilter(
 	cf,
@@ -1728,7 +1728,7 @@ scan.setFilter(filter);
           <para>Note: The same column qualifier can be used in different column families. This
             filter returns all matching columns. </para>
           <para>Example: Find all columns in a row and family that start with "abc"</para>
-          <programlisting>
+          <programlisting language="java">
 HTableInterface t = ...;
 byte[] row = ...;
 byte[] family = ...;
@@ -1758,7 +1758,7 @@ rs.close();
             prefixes. It can be used to efficiently get discontinuous sets of columns from very wide
             rows. </para>
           <para>Example: Find all columns in a row and family that start with "abc" or "xyz"</para>
-          <programlisting>
+          <programlisting language="java">
 HTableInterface t = ...;
 byte[] row = ...;
 byte[] family = ...;
@@ -1791,7 +1791,7 @@ rs.close();
             filter returns all matching columns. </para>
           <para>Example: Find all columns in a row and family between "bbbb" (inclusive) and "bbdd"
             (inclusive)</para>
-          <programlisting>
+          <programlisting language="java">
 HTableInterface t = ...;
 byte[] row = ...;
 byte[] family = ...;
@@ -2018,7 +2018,7 @@ rs.close();
                 was accessed. Catalog tables are configured like this. This group is the last one
                 considered during evictions.</para>
             <para>To mark a column family as in-memory, call
-                <programlisting>HColumnDescriptor.setInMemory(true);</programlisting> if creating a table from java,
+                <programlisting language="java">HColumnDescriptor.setInMemory(true);</programlisting> if creating a table from java,
                 or set <command>IN_MEMORY => true</command> when creating or altering a table in
                 the shell: e.g.  <programlisting>hbase(main):003:0> create  't', {NAME => 'f', IN_MEMORY => 'true'}</programlisting></para>
             </listitem>
@@ -2218,7 +2218,7 @@ rs.close();
               <step>
                 <para>Next, add the following configuration to the RegionServer's
                     <filename>hbase-site.xml</filename>.</para>
-                <programlisting>
+                <programlisting language="xml">
 <![CDATA[<property>
   <name>hbase.bucketcache.ioengine</name>
   <value>offheap</value>
@@ -2461,7 +2461,7 @@ rs.close();
                     ZooKeeper splitlog node (<filename>/hbase/splitlog</filename>) as tasks. You can
                   view the contents of the splitlog by issuing the following
                     <command>zkcli</command> command. Example output is shown.</para>
-                  <screen>ls /hbase/splitlog
+                  <screen language="bourne">ls /hbase/splitlog
 [hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost8.sample.com%2C57020%2C1340474893275-splitting%2Fhost8.sample.com%253A57020.1340474893900, 
 hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost3.sample.com%2C57020%2C1340474893299-splitting%2Fhost3.sample.com%253A57020.1340474893931, 
 hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost4.sample.com%2C57020%2C1340474893287-splitting%2Fhost4.sample.com%253A57020.1340474893946]                  
@@ -2846,7 +2846,7 @@ ctime = Sat Jun 23 11:13:40 PDT 2012
           Typically a custom split policy should extend HBase's default split policy: <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html">ConstantSizeRegionSplitPolicy</link>.
           </para>
          <para>The policy can be set globally through the HBaseConfiguration used, or on a per-table basis:
-<programlisting>
+<programlisting language="java">
 HTableDescriptor myHtd = ...;
 myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName());
 </programlisting>
@@ -2867,7 +2867,7 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
        opens the merged region on the regionserver and, finally, reports the merge to the Master.
         </para>
         <para>An example of region merges in the hbase shell
-          <programlisting>$ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
+          <programlisting language="bourne">$ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
           hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true
           </programlisting>
          It is an asynchronous operation, and the call returns immediately without waiting for the merge to complete.
@@ -2969,10 +2969,10 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
 
        <para>To view a textualized version of hfile content, you can use
         the <classname>org.apache.hadoop.hbase.io.hfile.HFile
-        </classname>tool. Type the following to see usage:<programlisting><code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile </code> </programlisting>For
+        </classname>tool. Type the following to see usage:<programlisting language="bourne"><code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile </code> </programlisting>For
         example, to view the content of the file
         <filename>hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475</filename>,
-        type the following:<programlisting> <code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475 </code> </programlisting>If
+        type the following:<programlisting language="bourne"> <code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475 </code> </programlisting>If
        you leave off the -v option, you see just a summary of the hfile. See
         usage for other things to do with the <classname>HFile</classname>
         tool.</para>
@@ -3818,7 +3818,7 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
                 <step>
                  <para>Run one of the following commands in the HBase shell. Replace the table name
                       <literal>orders_table</literal> with the name of your table.</para>
-                  <screen>
+                  <screen language="sql">
 <userinput>alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}</userinput>
 <userinput>alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}}</userinput>
 <userinput>create 'orders_table', 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}</userinput>                  
@@ -3842,7 +3842,7 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
                   <para>Set the <varname>hbase.hstore.engine.class</varname> option to either nil or
                       <literal>org.apache.hadoop.hbase.regionserver.DefaultStoreEngine</literal>.
                     Either option has the same effect.</para>
-                  <screen>
+                  <screen language="sql">
 <userinput>alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => ''}</userinput>
                 </screen>
                 </step>
@@ -3861,7 +3861,7 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
                column family, after disabling the table. If you use the HBase shell, the general
                 command pattern is as follows:</para>
 
-              <programlisting>
+              <programlisting language="sql">
 alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}}
               </programlisting>
               <section
@@ -4061,7 +4061,7 @@ alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
         where <code>importtsv</code> or your MapReduce job put its results, and
         the table name to import into. For example:
       </para>
-      <screen>$ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable</screen>
+      <screen language="bourne">$ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable</screen>
       <para>
         The <code>-c config-file</code> option can be used to specify a file
         containing the appropriate hbase parameters (e.g., hbase-site.xml) if
@@ -4143,7 +4143,7 @@ alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
 	       <title>Timeline Consistency </title>
 	         <para>
 			With this feature, HBase introduces a Consistency definition, which can be provided per read operation (get or scan).
-	<programlisting>
+	<programlisting language="java">
 public enum Consistency {
     STRONG,
     TIMELINE
@@ -4254,7 +4254,7 @@ public enum Consistency {
 		</para>
 		<section>
 			<title>Server side properties</title>
-			<programlisting><![CDATA[
+			<programlisting language="xml"><![CDATA[
 <property>
     <name>hbase.regionserver.storefile.refresh.period</name>
     <value>0</value>
@@ -4274,7 +4274,7 @@ public enum Consistency {
 				<title>Client side properties</title>
          <para> Be sure to set the following for all clients (and servers) that will use region
             replicas. </para>			
-			  <programlisting><![CDATA[
+			  <programlisting language="xml"><![CDATA[
 <property>
     <name>hbase.ipc.client.allowsInterrupt</name>
     <value>true</value>
@@ -4325,7 +4325,7 @@ flush 't1'
 
 	</section>
 	<section><title>Java</title>
-	<programlisting><![CDATA[
+	<programlisting language="java"><![CDATA[
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test_table"));
 htd.setRegionReplication(2);
 ...
@@ -4362,7 +4362,7 @@ hbase(main):001:0> get 't1','r6', {CONSISTENCY => "TIMELINE"}
 ]]></programlisting>
           <para> You can simulate a region server pausing or becoming unavailable and do a read from
             the secondary replica: </para>	
-				  <programlisting><![CDATA[
+				  <programlisting language="bourne"><![CDATA[
 $ kill -STOP <pid or primary region server>
 
 hbase(main):001:0> get 't1','r6', {CONSISTENCY => "TIMELINE"}
@@ -4376,14 +4376,14 @@ hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
 			<title>Java</title>
          <para>You can set the consistency for Gets and Scans and make requests as
             follows.</para> 
-	<programlisting><![CDATA[
+	<programlisting language="java"><![CDATA[
 Get get = new Get(row);
 get.setConsistency(Consistency.TIMELINE);
 ...
 Result result = table.get(get); 
 ]]></programlisting>
           <para>You can also pass multiple gets: </para>
-	<programlisting><![CDATA[
+	<programlisting language="java"><![CDATA[
 Get get1 = new Get(row);
 get1.setConsistency(Consistency.TIMELINE);
 ...
@@ -4393,7 +4393,7 @@ gets.add(get1);
 Result[] results = table.get(gets); 
 ]]></programlisting>
           <para>And Scans: </para>
-	<programlisting><![CDATA[
+	<programlisting language="java"><![CDATA[
 Scan scan = new Scan();
 scan.setConsistency(Consistency.TIMELINE);
 ...
@@ -4402,7 +4402,7 @@ ResultScanner scanner = table.getScanner(scan);
          <para>You can inspect whether the results are coming from the primary region by calling
             the Result.isStale() method: </para>
 
-	<programlisting><![CDATA[
+	<programlisting language="java"><![CDATA[
 Result result = table.get(get); 
 if (result.isStale()) {
   ...
@@ -4649,7 +4649,7 @@ identifying mode and a multi-phase read-write repair mode.
 	<section>
 	  <title>Running hbck to identify inconsistencies</title>
<para>To check whether your HBase cluster has corruptions, run hbck against your HBase cluster:</para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck
 </programlisting>
 	<para>
A run of hbck will report a list of inconsistencies along with a brief description of the regions and
tables affected. Using the <code>-details</code> option will report more details including a representative
 listing of all the splits present in all the tables.
 	</para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck -details
 </programlisting>
 <para>If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies
in only specific tables. For example, the following command would only attempt to check tables
 TableFoo and TableBar. The benefit is that hbck will run in less time.</para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck TableFoo TableBar
 </programlisting>
 	</section>
@@ -4726,12 +4726,12 @@ assigned or multiply assigned regions.</para>
 	</itemizedlist>
	To fix deployment and assignment problems, you can run this command:
 </para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck -fixAssignments
 </programlisting>
<para>To fix deployment and assignment problems as well as repair incorrect meta rows, you can
 run this command:</para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck -fixAssignments -fixMeta
 </programlisting>
 <para>There are a few classes of table integrity problems that are low risk repairs. The first two are
@@ -4743,12 +4743,12 @@ The third low-risk class is hdfs region holes. This can be repaired by using the
 If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent.</para>
 		</listitem>
 	</itemizedlist>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles
 </programlisting>
<para>Since this is a common operation, we’ve added the <code>-repairHoles</code> flag, which is equivalent to the
 previous command:</para>
-<programlisting>
+<programlisting language="bourne">
 $ ./bin/hbase hbck -repairHoles
 </programlisting>
 <para>If inconsistencies still remain after these steps, you most likely have table integrity problems
@@ -4800,14 +4800,14 @@ integrity options.</para>
 	</itemizedlist>
<para>Finally, there are safeguards to limit repairs to only specific tables. For example, the following
command would only attempt to check and repair tables TableFoo and TableBar.</para>
-<screen>
+<screen language="bourne">
 $ ./bin/hbase hbck -repair TableFoo TableBar
 </screen>
 	<section><title>Special cases: Meta is not properly assigned</title>
 <para>There are a few special cases that hbck can handle as well.
 Sometimes the meta table’s only region is inconsistently assigned or deployed. In this case
 there is a special <code>-fixMetaOnly</code> option that can try to fix meta assignments.</para>
-<screen>
+<screen language="bourne">
 $ ./bin/hbase hbck -fixMetaOnly -fixAssignments
 </screen>
 	</section>
@@ -4825,7 +4825,7 @@ directory, loads as much information from region metadata files (.regioninfo fil
 from the file system. If the region metadata has proper table integrity, it sidelines the original root
 and meta table directories, and builds new ones with pointers to the region directories and their
 data.</para>
-<screen>
+<screen language="bourne">
 $ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
 </screen>
 <para>NOTE: This tool is not as clever as uberhbck but can be used to bootstrap repairs that uberhbck
@@ -5085,7 +5085,7 @@ This option should not normally be used, and it is not in <code>-fixAll</code>.
               linkend="hbase.native.platform" />), you can make a symbolic link from HBase to the native Hadoop
             libraries. This assumes the two software installs are colocated. For example, if my
             'platform' is Linux-amd64-64:
-            <programlisting>$ cd $HBASE_HOME
+            <programlisting language="bourne">$ cd $HBASE_HOME
 $ mkdir lib/native
 $ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64</programlisting>
             Use the compression tool to check that LZ4 is installed on all nodes. Start up (or restart)
@@ -5128,7 +5128,7 @@ hbase(main):003:0> <userinput>alter 'TestTable', {NAME => 'info', COMPRESSION =>
           <title>CompressionTest</title>
           <para>You can use the CompressionTest tool to verify that your compressor is available to
             HBase:</para>
-          <screen>
+          <screen language="bourne">
  $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://<replaceable>host/path/to/hbase</replaceable> snappy       
           </screen>
         </section>
@@ -5192,7 +5192,7 @@ DESCRIPTION                                          ENABLED
         parameter, usage advice is printed for each option.</para>
         <example>
           <title><command>LoadTestTool</command> Usage</title>
-          <screen><![CDATA[
+          <screen language="bourne"><![CDATA[
 $ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h            
 usage: bin/hbase org.apache.hadoop.hbase.util.LoadTestTool <options>
 Options:
@@ -5248,7 +5248,7 @@ Options:
         </example>
         <example>
           <title>Example Usage of LoadTestTool</title>
-          <screen>
+          <screen language="bourne">
 $ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000
           -read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE
           </screen>
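
The book.xml hunks above follow a consistent pattern: Java API examples take language="java",
command-line and shell-script examples take language="bourne", and HBase shell DDL takes
language="sql". A hypothetical new listing written to this convention, keeping the CDATA marker
on the same line as the opening tag per the contributing appendix (the command and table name
are illustrative only):

    <para>To count the rows in a table, run the bundled RowCounter job:</para>
    <screen language="bourne"><![CDATA[$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar rowcounter usertable]]></screen>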

http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/case_studies.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/case_studies.xml b/src/main/docbkx/case_studies.xml
index 7824c7d..332caf8 100644
--- a/src/main/docbkx/case_studies.xml
+++ b/src/main/docbkx/case_studies.xml
@@ -145,7 +145,7 @@
             some unusual anomalies, namely interface errors, overruns, framing errors. While not
             unheard of, these kinds of errors are exceedingly rare on modern hardware which is
             operating as it should: </para>
-          <screen>		
+          <screen language="bourne">		
 $ /sbin/ifconfig bond0
 bond0  Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
 inet addr:10.x.x.x  Bcast:10.x.x.255  Mask:255.255.255.0
@@ -160,7 +160,7 @@ RX bytes:2416328868676 (2.4 TB)  TX bytes:3464991094001 (3.4 TB)
             running an ICMP ping from an external host and observing round-trip-time in excess of
             700ms, and by running <code>ethtool(8)</code> on the members of the bond interface and
            discovering that the active interface was operating at 100Mbps, full duplex. </para>
-          <screen>		
+          <screen language="bourne">		
 $ sudo ethtool eth0
 Settings for eth0:
 Supported ports: [ TP ]

http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/configuration.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/configuration.xml b/src/main/docbkx/configuration.xml
index b0b2864..5949b0a 100644
--- a/src/main/docbkx/configuration.xml
+++ b/src/main/docbkx/configuration.xml
@@ -520,16 +520,16 @@ Index: pom.xml
                     <listitem>
                       <para>Type the following commands:</para>
                       <para>
-                        <programlisting><![CDATA[$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/hbase.proto]]></programlisting>
+                        <programlisting language="bourne"><![CDATA[$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/hbase.proto]]></programlisting>
                       </para>
                       <para>
-                        <programlisting><![CDATA[$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/ErrorHandling.proto]]></programlisting>
+                        <programlisting language="bourne"><![CDATA[$ protoc -Isrc/main/protobuf --java_out=src/main/java src/main/protobuf/ErrorHandling.proto]]></programlisting>
                       </para>
                     </listitem>
                   </itemizedlist>
                  <para> Build against the hadoop 2 profile by running something like the
                     following command: </para>
-                  <screen>$  mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests</screen>
+                  <screen language="bourne">$  mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests</screen>
                 </footnote></entry>
               <entry>S</entry>
               <entry>S</entry>
@@ -615,7 +615,7 @@ Index: pom.xml
             <filename>hbase-site.xml</filename> -- and on the serverside in
             <filename>hdfs-site.xml</filename> (The sync facility HBase needs is a subset of the
           append code path).</para>
-        <programlisting><![CDATA[  
+        <programlisting language="xml"><![CDATA[  
 <property>
   <name>dfs.support.append</name>
   <value>true</value>
@@ -644,7 +644,7 @@ Index: pom.xml
           Hadoop's <filename>conf/hdfs-site.xml</filename>, setting the
           <varname>dfs.datanode.max.transfer.threads</varname> value to at least the following:
         </para>
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <property>
   <name>dfs.datanode.max.transfer.threads</name>
   <value>4096</value>
@@ -779,7 +779,7 @@ Index: pom.xml
           configuration parameters. Most HBase configuration directives have default values, which
           are used unless the value is overridden in the <filename>hbase-site.xml</filename>. See <xref
             linkend="config.files" /> for more information.</para>
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <configuration>
   <property>
     <name>hbase.rootdir</name>
@@ -891,7 +891,7 @@ node-c.example.com
         finally disable and drop your tables.</para>
 
      <para>To stop HBase after exiting the HBase shell, enter</para>
-      <screen>$ ./bin/stop-hbase.sh
+      <screen language="bourne">$ ./bin/stop-hbase.sh
 stopping hbase...............</screen>
      <para>Shutdown can take a moment to complete. It can take longer if your cluster is composed
         of many machines. If you are running a distributed operation, be sure to wait until HBase
@@ -1063,7 +1063,7 @@ slf4j-log4j (slf4j-log4j12-1.5.8.jar)
 zookeeper (zookeeper-3.4.2.jar)</programlisting>
       </para>
       <para> An example basic <filename>hbase-site.xml</filename> for client only might look as
-        follows: <programlisting><![CDATA[
+        follows: <programlisting language="xml"><![CDATA[
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <configuration>
@@ -1090,7 +1090,7 @@ zookeeper (zookeeper-3.4.2.jar)</programlisting>
             <filename>hbase.X.X.X.jar</filename>). It is also possible to specify configuration
           directly without having to read from a <filename>hbase-site.xml</filename>. For example,
           to set the ZooKeeper ensemble for the cluster programmatically do as follows:
-          <programlisting>Configuration config = HBaseConfiguration.create();
+          <programlisting language="java">Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally</programlisting>
           If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in
           a comma-separated list (just as in the <filename>hbase-site.xml</filename> file). This
@@ -1126,7 +1126,7 @@ config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zooke
         xml:id="hbase_site">
         <title><filename>hbase-site.xml</filename></title>
 
-        <programlisting>
+        <programlisting language="bourne">
 <![CDATA[
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
@@ -1140,7 +1140,7 @@ config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zooke
   <property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/export/zookeeper</value>
-    <description>Property from ZooKeeper's config zoo.cfg.
+    <description>Property from ZooKeeper config zoo.cfg.
     The directory where the snapshot is stored.
     </description>
   </property>
@@ -1191,7 +1191,7 @@ example9
             <filename>hbase-env.sh</filename> file. Here we are setting the HBase heap to be 4G
           instead of the default 1G.</para>
 
-        <screen>
+        <screen language="bourne">
 
 $ git diff hbase-env.sh
 diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
@@ -1476,7 +1476,7 @@ index e70ebc6..96f8c27 100644
          running on a late-version HDFS so you have the fixes he refers to and himself adds to HDFS that help HBase MTTR
           (e.g. HDFS-3703, HDFS-3712, and HDFS-4791 -- hadoop 2 for sure has them and late hadoop 1 has some).
           Set the following in the RegionServer.</para>
-      <programlisting>
+      <programlisting language="xml">
 <![CDATA[<property>
 <property>
     <name>hbase.lease.recovery.dfs.timeout</name>
@@ -1493,7 +1493,7 @@ index e70ebc6..96f8c27 100644
 
         <para>And on the namenode/datanode side, set the following to enable 'staleness' introduced
           in HDFS-3703, HDFS-3912. </para>
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <property>
     <name>dfs.client.socket-timeout</name>
     <value>10000</value>
@@ -1550,7 +1550,7 @@ index e70ebc6..96f8c27 100644
        <para>As an alternative, you can use the coprocessor-based JMX implementation provided
          by HBase. To enable it in 0.99 or above, add the following property in
           <filename>hbase-site.xml</filename>:
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <property>
     <name>hbase.coprocessor.regionserver.classes</name>
     <value>org.apache.hadoop.hbase.JMXListener</value>
@@ -1566,7 +1566,7 @@ index e70ebc6..96f8c27 100644
          By default, JMX listens on TCP port 10102. You can further configure the port
          using the following properties:
 
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <property>
     <name>regionserver.rmi.registry.port</name>
     <value>61130</value>
@@ -1584,7 +1584,7 @@ index e70ebc6..96f8c27 100644
        <para>By default, password authentication and SSL communication are disabled.
          To enable password authentication, you need to update <filename>hbase-env.sh</filename>
          as follows:
-      <screen>
+      <screen language="bourne">
 export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.authenticate=true                  \
                        -Dcom.sun.management.jmxremote.password.file=your_password_file   \
                        -Dcom.sun.management.jmxremote.access.file=your_access_file"
@@ -1596,7 +1596,7 @@ export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
         </para>
 
        <para>To enable SSL communication with password authentication, follow the steps below:
-      <screen>
+      <screen language="bourne">
 #1. generate a key pair, stored in myKeyStore
 keytool -genkey -alias jconsole -keystore myKeyStore
 
@@ -1607,10 +1607,10 @@ keytool -export -alias jconsole -keystore myKeyStore -file jconsole.cert
 keytool -import -alias jconsole -keystore jconsoleKeyStore -file jconsole.cert
       </screen>
          Then update <filename>hbase-env.sh</filename> as follows:
-      <screen>
+      <screen language="bourne">
 export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=true                         \
                        -Djavax.net.ssl.keyStore=/home/tianq/myKeyStore                 \
-                       -Djavax.net.ssl.keyStorePassword=your_password_in_step_#1       \
+                       -Djavax.net.ssl.keyStorePassword=your_password_in_step_1       \
                        -Dcom.sun.management.jmxremote.authenticate=true                \
                        -Dcom.sun.management.jmxremote.password.file=your_password file \
                        -Dcom.sun.management.jmxremote.access.file=your_access_file"
@@ -1620,13 +1620,13 @@ export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE "
       </screen>
 
          Finally, start jconsole on the client using the key store:
-      <screen>
+      <screen language="bourne">
 jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore
       </screen>
         </para>
        <para>NOTE: For HBase 0.98, to enable the HBase JMX implementation on the Master, you also
          need to add the following property in <filename>hbase-site.xml</filename>:
-        <programlisting><![CDATA[
+        <programlisting language="xml"><![CDATA[
 <property>
     <name>hbase.coprocessor.master.classes</name>
     <value>org.apache.hadoop.hbase.JMXListener</value>

http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/cp.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/cp.xml b/src/main/docbkx/cp.xml
index 9cc0859..7062d7d 100644
--- a/src/main/docbkx/cp.xml
+++ b/src/main/docbkx/cp.xml
@@ -265,7 +265,7 @@
       <example>
         <title>Example RegionObserver Configuration</title>
         <para>In this example, one RegionObserver is configured for all the HBase tables.</para>
-        <screen><![CDATA[
+        <screen language="xml"><![CDATA[
 <property>
     <name>hbase.coprocessor.region.classes</name>
     <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>

http://git-wip-us.apache.org/repos/asf/hbase/blob/24b5fa7f/src/main/docbkx/customization.xsl
----------------------------------------------------------------------
diff --git a/src/main/docbkx/customization.xsl b/src/main/docbkx/customization.xsl
index 43d8df7..5d0ec2c 100644
--- a/src/main/docbkx/customization.xsl
+++ b/src/main/docbkx/customization.xsl
@@ -22,6 +22,7 @@
  */
 -->
   <xsl:import href="urn:docbkx:stylesheet"/>
+  <xsl:import href="urn:docbkx:stylesheet/highlight.xsl"/>
   <xsl:output method="html" encoding="UTF-8" indent="no"/>
 
   <xsl:template name="user.header.content">
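
The imported urn:docbkx:stylesheet/highlight.xsl is what maps each language attribute to an
xslthl highlighter at render time. For reference, driving the stock DocBook XSL stylesheets
directly (outside docbkx) would use roughly the following parameters; highlight.source and
highlight.xslthl.config are standard DocBook XSL parameter names, but the config path shown is
a placeholder:

    <!-- a sketch of the stock DocBook XSL equivalents; docbkx handles this via highlight.xsl -->
    <xsl:param name="highlight.source" select="1"/>
    <xsl:param name="highlight.xslthl.config">file:///path/to/xslthl-config.xml</xsl:param>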