Posted to commits@eagle.apache.org by ha...@apache.org on 2017/11/22 06:00:05 UTC

[1/3] eagle git commit: Add eagle v0.5.0 release doc

Repository: eagle
Updated Branches:
  refs/heads/site d6fed6778 -> 0094010b8


Add eagle v0.5.0 release doc


Project: http://git-wip-us.apache.org/repos/asf/eagle/repo
Commit: http://git-wip-us.apache.org/repos/asf/eagle/commit/038bde7f
Tree: http://git-wip-us.apache.org/repos/asf/eagle/tree/038bde7f
Diff: http://git-wip-us.apache.org/repos/asf/eagle/diff/038bde7f

Branch: refs/heads/site
Commit: 038bde7f53fa48655340d043465e74a1698564c6
Parents: d6fed67
Author: Hao Chen <hc...@ebay.com>
Authored: Wed Nov 22 13:50:39 2017 +0800
Committer: Hao Chen <hc...@ebay.com>
Committed: Wed Nov 22 13:50:39 2017 +0800

----------------------------------------------------------------------
 download-latest.md | 39 +++++++++++----------------------------
 download.md        | 12 ++++++++++++
 2 files changed, 23 insertions(+), 28 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/eagle/blob/038bde7f/download-latest.md
----------------------------------------------------------------------
diff --git a/download-latest.md b/download-latest.md
index d0f83c6..21427ba 100644
--- a/download-latest.md
+++ b/download-latest.md
@@ -4,41 +4,24 @@ title:  "Apache Eagle Latest Download"
 permalink: /docs/download-latest.html
 ---
 
-> Version `0.4.0-incubating` is the latest release and `0.5.0-SNAPSHOT` is under active development on [master](https://github.com/apache/eagle/tree/master) branch.
+> Version **0.5.0** is the latest release, and the next SNAPSHOT version is under active development on the [master](https://github.com/apache/eagle/tree/master) branch.
 >
 > You can verify your download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://dist.apache.org/repos/dist/release/eagle/KEYS).
 
-
-# 0.5.0-SNAPSHOT
-
-> The first GA version `v0.5.0` with fantastic improvements and features is coming soon!
-
-* Build from source code:
-		
-		git clone https://github.com/apache/eagle.git
-		
-* Release notes for preview:
-	* [Eagle 0.5.0 Release Notes](https://cwiki.apache.org/confluence/display/EAG/Eagle+Version+0.5.0)
-* Documentation: 
-	* [Eagle 0.5.0 Documentations](/docs/latest/)
-
-# 0.4.0-incubating
-
-[![Eagle Latest Maven Release](https://maven-badges.herokuapp.com/maven-central/org.apache.eagle/eagle-parent/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.eagle%22%20AND%20a%3A%22eagle-parent%22)
-
-* Release notes: 
-	* [Eagle 0.4.0 Release Notes](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.4.0-incubating)
-* Source download: 
-	* [apache-eagle-0.4.0-incubating-src.tar.gz](http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.4.0-incubating)
-	* [apache-eagle-0.4.0-incubating-src.tar.gz.md5](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5)
-	* [apache-eagle-0.4.0-incubating-src.tar.gz.sha1](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.sha1)
-	* [apache-eagle-0.4.0-incubating-src.tar.gz.asc](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.asc)
+# 0.5.0
+* Release notes:
+	* [Eagle 0.5.0 Release Notes](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.5.0)
+* Source download:
+	* [apache-eagle-0.5.0-src.tar.gz](http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.5.0)
+	* [apache-eagle-0.5.0-src.tar.gz.md5](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.md5)
+	* [apache-eagle-0.5.0-src.tar.gz.sha1](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.sha1)
+	* [apache-eagle-0.5.0-src.tar.gz.asc](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.asc)
 * Git revision: 
-	* tag: [v0.4.0-incubating](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.4.0-incubating)
-	* commit: [eac0f27958f2ed8c6842938dad0a995a87fd0715](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=eac0f27958f2ed8c6842938dad0a995a87fd0715)
+	* tag: [v0.5.0](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.5.0)
 
 # Previous Releases
 
+* [Eagle 0.4.0-incubating](/docs/download.html#0.4.0-incubating)
 * [Eagle 0.3.0-incubating](/docs/download.html#0.3.0-incubating)
 
 More history releases can be found on [here](/docs/download.html).
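The download page above links the ASF verification procedures and the `.md5`/`.sha1`/`.asc` sidecar files for the release artifact. A minimal sketch of the checksum half of that check, using a local stand-in payload so it runs anywhere (a real verification would also run `gpg --verify` against the `.asc` signature after importing the KEYS file):

```shell
# Stand-in payload for illustration; in practice this would be the tarball
# downloaded from the mirror link on the page.
printf 'hello\n' > apache-eagle-0.5.0-src.tar.gz

# The .sha1 sidecar has the format "<hex digest>  <filename>"; here we
# generate one locally instead of downloading it from dist.apache.org.
sha1sum apache-eagle-0.5.0-src.tar.gz > apache-eagle-0.5.0-src.tar.gz.sha1

# --check recomputes the digest and compares; prints "<file>: OK" on a match
# and exits non-zero on a mismatch.
sha1sum --check apache-eagle-0.5.0-src.tar.gz.sha1
```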

http://git-wip-us.apache.org/repos/asf/eagle/blob/038bde7f/download.md
----------------------------------------------------------------------
diff --git a/download.md b/download.md
index 71d7710..7638017 100644
--- a/download.md
+++ b/download.md
@@ -6,6 +6,18 @@ permalink: /docs/download.html
 
 > You can verify your download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://dist.apache.org/repos/dist/release/eagle/KEYS).
 
+# 0.5.0
+* Release notes:
+	* [Eagle 0.5.0 Release Notes](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.5.0)
+* Source download:
+	* [apache-eagle-0.5.0-src.tar.gz](http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.5.0)
+	* [apache-eagle-0.5.0-src.tar.gz.md5](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.md5)
+	* [apache-eagle-0.5.0-src.tar.gz.sha1](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.sha1)
+	* [apache-eagle-0.5.0-src.tar.gz.asc](https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.asc)
+* Git revision: 
+	* tag: [v0.5.0](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.5.0)
+	* commit: [c930a7ab4a4fb78a1cbacd8f8419de3b5dbf1bd7](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=c930a7ab4a4fb78a1cbacd8f8419de3b5dbf1bd7)
+
 # 0.4.0-incubating
 * Release notes: 
 	* [Eagle 0.4.0 Release Notes](https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.4.0-incubating)
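The "Git revision" entries above pair each release tag with the exact commit it pins. A local sketch of that relationship, using a throwaway repository rather than the real eagle repo:

```shell
# Throwaway repo and tag name; only the mechanics mirror the release entry above.
git init -q demo && cd demo
git -c user.email=demo@example.invalid -c user.name=demo \
    commit -q --allow-empty -m "release"

# Tagging records the current commit under a fixed name.
git tag v0.5.0

# Checking out the tag gives a detached HEAD at that exact commit,
# which is how a source build is pinned to a released revision.
git checkout -q v0.5.0
git rev-parse 'v0.5.0^{commit}'   # resolves the tag to its commit hash
```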


[3/3] eagle git commit: Update site

Posted by ha...@apache.org.
Update site


Project: http://git-wip-us.apache.org/repos/asf/eagle/repo
Commit: http://git-wip-us.apache.org/repos/asf/eagle/commit/0094010b
Tree: http://git-wip-us.apache.org/repos/asf/eagle/tree/0094010b
Diff: http://git-wip-us.apache.org/repos/asf/eagle/diff/0094010b

Branch: refs/heads/site
Commit: 0094010b853e3d86acd253649fe21d32f418a0c9
Parents: 038bde7
Author: Hao Chen <hc...@ebay.com>
Authored: Wed Nov 22 13:55:06 2017 +0800
Committer: Hao Chen <hc...@ebay.com>
Committed: Wed Nov 22 13:55:06 2017 +0800

----------------------------------------------------------------------
 .gitignore                                      |   1 +
 .../_base.scssc                                 | Bin 14569 -> 0 bytes
 .../_layout.scssc                               | Bin 43155 -> 0 bytes
 .../_syntax-highlighting.scssc                  | Bin 40242 -> 0 bytes
 _site/docs/FAQ.html                             |   6 +-
 _site/docs/ambari-plugin-install.html           |  19 ++--
 _site/docs/cloudera-integration.html            |  62 +++++++------
 _site/docs/community.html                       |  28 +++---
 _site/docs/configuration.html                   |  14 +--
 _site/docs/deployment-env.html                  |  14 +--
 _site/docs/deployment-in-docker.html            |  12 ++-
 _site/docs/deployment-in-production.html        |  22 +++--
 _site/docs/deployment-in-sandbox.html           |  37 ++++----
 _site/docs/development-in-intellij.html         |  16 ++--
 _site/docs/development-in-macosx.html           |  68 +++++++++------
 _site/docs/download-latest.html                 |  46 ++--------
 _site/docs/download.html                        |  27 +++++-
 _site/docs/hbase-auth-activity-monitoring.html  |  12 +--
 _site/docs/hbase-data-activity-monitoring.html  |  48 +++++++----
 _site/docs/hdfs-auth-activity-monitoring.html   |  13 +--
 _site/docs/hdfs-data-activity-monitoring.html   |  20 +++--
 _site/docs/hive-query-activity-monitoring.html  |   5 +-
 _site/docs/import-hdfs-auditLog.html            |  30 ++++---
 _site/docs/index.html                           |   6 +-
 _site/docs/installation.html                    |  45 ++++++----
 _site/docs/jmx-metric-monitoring.html           |  13 +--
 _site/docs/mapr-integration.html                |  61 +++++++------
 _site/docs/quick-start-0.3.0.html               |  10 ++-
 _site/docs/quick-start.html                     |  17 ++--
 _site/docs/security.html                        |   2 +-
 _site/docs/serviceconfiguration.html            |   9 +-
 _site/docs/terminology.html                     |   4 +-
 _site/docs/tutorial/classification.html         |  22 +++--
 _site/docs/tutorial/ldap.html                   |   6 +-
 _site/docs/tutorial/notificationplugin.html     |  34 ++++----
 _site/docs/tutorial/policy.html                 |   8 +-
 _site/docs/tutorial/site-0.3.0.html             |  86 ++++++++++---------
 _site/docs/tutorial/topologymanagement.html     |  14 +--
 _site/docs/tutorial/userprofile.html            |  25 +++---
 _site/docs/usecases.html                        |   8 +-
 _site/feed.xml                                  |  42 +++++----
 .../2015/10/27/apache-eagle-announce-cn.html    |  36 ++++----
 42 files changed, 545 insertions(+), 403 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/.gitignore
----------------------------------------------------------------------
diff --git a/.gitignore b/.gitignore
index 173da0c..9be3c2a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -84,3 +84,4 @@ logs/
 
 **/*.db
 docs/site
+.sass-cache/

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_base.scssc
----------------------------------------------------------------------
diff --git a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_base.scssc b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_base.scssc
deleted file mode 100644
index 93239c2..0000000
Binary files a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_base.scssc and /dev/null differ

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_layout.scssc
----------------------------------------------------------------------
diff --git a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_layout.scssc b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_layout.scssc
deleted file mode 100644
index 6fd7426..0000000
Binary files a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_layout.scssc and /dev/null differ

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_syntax-highlighting.scssc
----------------------------------------------------------------------
diff --git a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_syntax-highlighting.scssc b/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_syntax-highlighting.scssc
deleted file mode 100644
index ae4fd0e..0000000
Binary files a/.sass-cache/bde63e899d42a91f9035e0b509e18ff435977dba/_syntax-highlighting.scssc and /dev/null differ

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/FAQ.html
----------------------------------------------------------------------
diff --git a/_site/docs/FAQ.html b/_site/docs/FAQ.html
index 78881fd..4dc3f45 100644
--- a/_site/docs/FAQ.html
+++ b/_site/docs/FAQ.html
@@ -164,14 +164,16 @@
       <p>Add the following line in host machine’s hosts file</p>
     </blockquote>
 
-    <pre><code>127.0.0.1 sandbox.hortonworks.com
+    <div class="highlighter-rouge"><pre class="highlight"><code>127.0.0.1 sandbox.hortonworks.com
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Q2. Not able to send data into kafka using kafka console producer</strong>:</p>
 
-    <pre><code>/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list localhost:6667 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list localhost:6667 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
 
     <blockquote>
       <p>Apache Kafka broker are binding to host sandbox.hortonworks.com</p>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/ambari-plugin-install.html
----------------------------------------------------------------------
diff --git a/_site/docs/ambari-plugin-install.html b/_site/docs/ambari-plugin-install.html
index b1f6578..ecb4cb7 100644
--- a/_site/docs/ambari-plugin-install.html
+++ b/_site/docs/ambari-plugin-install.html
@@ -168,8 +168,9 @@
   <li>
     <p>Create a Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">1</a></sup> topic if you have not. Here is an example command.</p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Stream HDFS log data to Kafka, and refer to <a href="/docs/import-hdfs-auditLog.html">here</a> on how to do it .</p>
@@ -185,8 +186,9 @@
   <li>
     <p>Install Eagle Ambari plugin</p>
 
-    <pre><code>$ /usr/hdp/current/eagle/bin/eagle-ambari.sh install
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/bin/eagle-ambari.sh install
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Restart <a href="http://127.0.0.1:8000/">Ambari</a> click on disable and enable Ambari back.</p>
@@ -200,10 +202,11 @@
   <li>
     <p>Add Policies and meta data required by running the below script.</p>
 
-    <pre><code>$ cd &lt;eagle-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd &lt;eagle-home&gt;
 $ examples/sample-sensitivity-resource-create.sh
 $ examples/sample-policy-create.sh
 </code></pre>
+    </div>
   </li>
 </ol>
 
@@ -214,19 +217,19 @@ $ examples/sample-policy-create.sh
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:SPARK">
-      <p><em>Apache Spark.</em> <a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Spark.</em>&nbsp;<a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/cloudera-integration.html
----------------------------------------------------------------------
diff --git a/_site/docs/cloudera-integration.html b/_site/docs/cloudera-integration.html
index 20dbf64..f39b50f 100644
--- a/_site/docs/cloudera-integration.html
+++ b/_site/docs/cloudera-integration.html
@@ -168,8 +168,8 @@ This tutorial is to address these issues before you continue to follow the tutor
 <ul>
   <li>Zookeeper (installed through Cloudera Manager)</li>
   <li>Kafka (installed through Cloudera Manager)</li>
-  <li>Storm (<code>0.9.x</code> or <code>0.10.x</code>, installed manually)</li>
-  <li>Logstash (<code>2.X</code>, installed manually on NameNode)</li>
+  <li>Storm (<code class="highlighter-rouge">0.9.x</code> or <code class="highlighter-rouge">0.10.x</code>, installed manually)</li>
+  <li>Logstash (<code class="highlighter-rouge">2.X</code>, installed manually on NameNode)</li>
 </ul>
 
 <h3 id="kafka">Kafka</h3>
@@ -180,17 +180,18 @@ This tutorial is to address these issues before you continue to follow the tutor
 
 <ul>
   <li>
-    <p>Open Cloudera Manager and open “kafka” configuration, then set <code>“zookeeper Root”</code> to <code>“/”</code>.</p>
+    <p>Open Cloudera Manager and open “kafka” configuration, then set <code class="highlighter-rouge">“zookeeper Root”</code> to <code class="highlighter-rouge">“/”</code>.</p>
   </li>
   <li>
-    <p>If Kafka cannot be started successfully, check kafka’s log. If stack trace shows: <code>“java.lang.OutOfMemoryError: Java heap space”</code>. Increase heap size by setting <code>"KAFKA_HEAP_OPTS"</code>in <code>/bin/kafka-server-start.sh</code>.</p>
+    <p>If Kafka cannot be started successfully, check kafka’s log. If stack trace shows: <code class="highlighter-rouge">“java.lang.OutOfMemoryError: Java heap space”</code>. Increase heap size by setting <code class="highlighter-rouge">"KAFKA_HEAP_OPTS"</code>in <code class="highlighter-rouge">/bin/kafka-server-start.sh</code>.</p>
   </li>
 </ul>
 
 <p>Example:</p>
 
-<pre><code>                  export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
+<div class="highlighter-rouge"><pre class="highlight"><code>                  export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
 </code></pre>
+</div>
 
 <h4 id="verification">Verification</h4>
 
@@ -198,15 +199,17 @@ This tutorial is to address these issues before you continue to follow the tutor
   <li>Step1: create a kafka topic (here I created a topic called “test”, which will be used in  logstash configuration file to receive hdfsAudit log messages from Cloudera.</li>
 </ul>
 
-<pre><code>bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
 </code></pre>
+</div>
 
 <ul>
   <li>Step2: check if topic has been created successfully.</li>
 </ul>
 
-<pre><code>bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
 </code></pre>
+</div>
 
 <p>this command will show all created topics.</p>
 
@@ -214,9 +217,10 @@ This tutorial is to address these issues before you continue to follow the tutor
   <li>Step3: open two terminals, start “producer” and “consumer” separately.</li>
 </ul>
 
-<pre><code>/usr/bin/kafka-console-producer --broker-list hostname:9092 --topic test
+<div class="highlighter-rouge"><pre class="highlight"><code>/usr/bin/kafka-console-producer --broker-list hostname:9092 --topic test
 /usr/bin/kafka-console-consumer --zookeeper hostname:2181 --topic test
 </code></pre>
+</div>
 
 <ul>
   <li>Step4: type in some message in producer. If consumer can receive the messages sent from producer, then kafka is working fine. Otherwise please check the configuration and logs to identify the root cause of issues.</li>
@@ -228,33 +232,36 @@ This tutorial is to address these issues before you continue to follow the tutor
 
 <p>You can follow <a href="https://www.elastic.co/downloads/logstash">logstash online doc</a> to download and install logstash on your machine:</p>
 
-<p>Or you can install it through <code>yum</code> if you are using centos:</p>
+<p>Or you can install it through <code class="highlighter-rouge">yum</code> if you are using centos:</p>
 
 <ul>
   <li>download and install the public signing key:</li>
 </ul>
 
-<pre><code>rpm --import  https://packages.elastic.co/GPG-KEY-elasticsearch
+<div class="highlighter-rouge"><pre class="highlight"><code>rpm --import  https://packages.elastic.co/GPG-KEY-elasticsearch
 </code></pre>
+</div>
 
 <ul>
-  <li>Add the following lines in <code>/etc/yum.repos.d/</code> directory in a file with a <code>.repo</code> suffix, for example <code>logstash.repo</code>.</li>
+  <li>Add the following lines in <code class="highlighter-rouge">/etc/yum.repos.d/</code> directory in a file with a <code class="highlighter-rouge">.repo</code> suffix, for example <code class="highlighter-rouge">logstash.repo</code>.</li>
 </ul>
 
-<pre><code>[logstash-2.3]
+<div class="highlighter-rouge"><pre class="highlight"><code>[logstash-2.3]
 name=Logstash repository for 2.3.x packages
 baseurl=https://packages.elastic.co/logstash/2.3/centos
 gpgcheck=1
 gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
 enabled=1
 </code></pre>
+</div>
 
 <ul>
-  <li>Then install it using <code>yum</code>:</li>
+  <li>Then install it using <code class="highlighter-rouge">yum</code>:</li>
 </ul>
 
-<pre><code>yum install logstash
+<div class="highlighter-rouge"><pre class="highlight"><code>yum install logstash
 </code></pre>
+</div>
 
 <h4 id="create-conf-file">Create conf file</h4>
 
@@ -262,8 +269,9 @@ enabled=1
 
 <h4 id="start-logstash">Start logstash</h4>
 
-<pre><code>bin/logstash -f conf/first-pipeline.conf
+<div class="highlighter-rouge"><pre class="highlight"><code>bin/logstash -f conf/first-pipeline.conf
 </code></pre>
+</div>
 
 <h4 id="verification-1">Verification</h4>
 
@@ -274,51 +282,55 @@ enabled=1
 
 <h4 id="installation-1">Installation</h4>
 
-<p>Download Apache Storm from <a href="http://storm.apache.org/downloads.html">here</a>, the version you choose should be <code>0.10.x</code> or <code>0.9.x</code> release.
+<p>Download Apache Storm from <a href="http://storm.apache.org/downloads.html">here</a>, the version you choose should be <code class="highlighter-rouge">0.10.x</code> or <code class="highlighter-rouge">0.9.x</code> release.
 Then follow <a href="http://storm.apache.org/releases/0.10.0/Setting-up-a-Storm-cluster.html">Apache Storm online doc</a>) to install Apache Storm on your cluster.</p>
 
-<p>In <code>/etc/profile</code>, add this:</p>
+<p>In <code class="highlighter-rouge">/etc/profile</code>, add this:</p>
 
-<pre><code>export PATH=$PATH:/opt/apache-storm-0.10.1/bin/
+<div class="highlighter-rouge"><pre class="highlight"><code>export PATH=$PATH:/opt/apache-storm-0.10.1/bin/
 </code></pre>
+</div>
 
 <p>save the profile and then type:</p>
 
-<pre><code>source /etc/profile 
+<div class="highlighter-rouge"><pre class="highlight"><code>source /etc/profile 
 </code></pre>
+</div>
 
 <p>to make it work.</p>
 
 <h4 id="configuration-1">Configuration</h4>
 
-<p>In <code>storm/conf/storm.yaml</code>, change the hostname to your own host.</p>
+<p>In <code class="highlighter-rouge">storm/conf/storm.yaml</code>, change the hostname to your own host.</p>
 
 <h4 id="start-apache-storm">Start Apache Storm</h4>
 
 <p>In Termial, type:</p>
 
-<pre><code>$: storm nimbus
+<div class="highlighter-rouge"><pre class="highlight"><code>$: storm nimbus
 $: storm supervisor
 $: storm UI
 </code></pre>
+</div>
 
 <h4 id="verification-2">Verification</h4>
 
-<p>Open storm UI in your browser, default URL is : <code>http://hostname:8080/index.html</code>.</p>
+<p>Open storm UI in your browser, default URL is : <code class="highlighter-rouge">http://hostname:8080/index.html</code>.</p>
 
 <h3 id="apache-eagle">Apache Eagle</h3>
 
 <p>To download and install Apache Eagle, please refer to  <a href="http://eagle.incubator.apache.org/docs/quick-start.html">Get Started with Sandbox.</a> .</p>
 
-<p>One thing need to mention is: in <code>“/bin/eagle-topology.sh”</code>, line 102:</p>
+<p>One thing need to mention is: in <code class="highlighter-rouge">“/bin/eagle-topology.sh”</code>, line 102:</p>
 
-<pre><code>			storm_ui=http://localhost:8080
+<div class="highlighter-rouge"><pre class="highlight"><code>			storm_ui=http://localhost:8080
 </code></pre>
+</div>
 
 <p>If you are not using the default port number, change this to your own Storm UI url.</p>
 
 <p>I know it takes time to finish these configuration, but now it is time to have fun! 
-Just try <code>HDFS Data Activity Monitoring</code> with <code>Demo</code> listed in <a href="http://eagle.incubator.apache.org/docs/hdfs-data-activity-monitoring.html">HDFS Data Activity Monitoring.</a></p>
+Just try <code class="highlighter-rouge">HDFS Data Activity Monitoring</code> with <code class="highlighter-rouge">Demo</code> listed in <a href="http://eagle.incubator.apache.org/docs/hdfs-data-activity-monitoring.html">HDFS Data Activity Monitoring.</a></p>
 
 
       </div><!--end of loadcontent-->  

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/community.html
----------------------------------------------------------------------
diff --git a/_site/docs/community.html b/_site/docs/community.html
index 9df21e5..f616802 100644
--- a/_site/docs/community.html
+++ b/_site/docs/community.html
@@ -183,13 +183,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#117;&#115;&#101;&#114;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:user@eagle.apache.org">user@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#117;&#115;&#101;&#114;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:user-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:user-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-user/">eagle-user</a></td>
     </tr>
     <tr>
@@ -198,13 +198,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#100;&#101;&#118;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:dev@eagle.apache.org">dev@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:dev-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:dev-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-dev/">eagle-dev</a></td>
     </tr>
     <tr>
@@ -213,13 +213,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#105;&#115;&#115;&#117;&#101;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:issues@eagle.apache.org">issues@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#105;&#115;&#115;&#117;&#101;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:issues-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:issues-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-issues/">eagle-issues</a></td>
     </tr>
     <tr>
@@ -228,13 +228,13 @@
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></td>
+      <td><a href="mailto:commits@eagle.apache.org">commits@eagle.apache.org</a></td>
       <td> </td>
       <td> </td>
       <td> </td>
       <td> </td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">subscribe</a></td>
-      <td><a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#099;&#111;&#109;&#109;&#105;&#116;&#115;&#045;&#117;&#110;&#115;&#117;&#098;&#115;&#099;&#114;&#105;&#098;&#101;&#064;&#101;&#097;&#103;&#108;&#101;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">unsubscribe</a></td>
+      <td><a href="mailto:commits-subscribe@eagle.apache.org">subscribe</a></td>
+      <td><a href="mailto:commits-unsubscribe@eagle.apache.org">unsubscribe</a></td>
       <td><a href="http://mail-archives.apache.org/mod_mbox/eagle-commits/">eagle-commits</a></td>
     </tr>
   </tbody>
@@ -317,7 +317,7 @@
   <li><strong>Wechat</strong>: Apache_Eagle</li>
 </ul>
 
-<h3 id="events-and-meetupshadoop">Events and Meetups<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup></h3>
+<h3 id="events-and-meetups">Events and Meetups<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup></h3>
 
 <p><strong>Conferences</strong></p>
 
@@ -335,7 +335,7 @@
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/configuration.html
----------------------------------------------------------------------
diff --git a/_site/docs/configuration.html b/_site/docs/configuration.html
index 83501ae..d664d26 100644
--- a/_site/docs/configuration.html
+++ b/_site/docs/configuration.html
@@ -156,7 +156,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: -15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Application Configuration</h1>
-        <p>Apache Eagle (called Eagle in the following) requires you to create a configuration file under <code>$EAGLE_HOME/conf/</code> for each application. Basically, there are some common properties shared, e.g., envContextConfig, eagleProps, and dynamicConfigSource, while dataSourceConfig differs from application to application.</p>
+        <p>Apache Eagle (called Eagle in the following) requires you to create a configuration file under <code class="highlighter-rouge">$EAGLE_HOME/conf/</code> for each application. Basically, there are some common properties shared, e.g., envContextConfig, eagleProps, and dynamicConfigSource, while dataSourceConfig differs from application to application.</p>
 
 <p>On this page we take the following two applications as examples</p>
 
@@ -610,22 +610,22 @@
 <div class="footnotes">
   <ol>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:ZOOKEEPER">
-      <p><em>All mentions of “zookeeper” on this page represent Apache ZooKeeper.</em> <a href="#fnref:ZOOKEEPER" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “zookeeper” on this page represent Apache ZooKeeper.</em>&nbsp;<a href="#fnref:ZOOKEEPER" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:TOMCAT">
-      <p><em>Apache Tomcat.</em> <a href="#fnref:TOMCAT" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Tomcat.</em>&nbsp;<a href="#fnref:TOMCAT" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/deployment-env.html
----------------------------------------------------------------------
diff --git a/_site/docs/deployment-env.html b/_site/docs/deployment-env.html
index 73cea54..8fc2f3d 100644
--- a/_site/docs/deployment-env.html
+++ b/_site/docs/deployment-env.html
@@ -158,7 +158,7 @@
         <h1 class="page-header" style="margin-top: 0px">Deploy Environment</h1>
         <h3 id="setup-environment">Setup Environment</h3>
 
-<p>Apache Eagle (called Eagle in the following), as an analytics solution for identifying security and performance issues instantly, relies on the streaming platform <code>Storm</code><sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">1</a></sup> + <code>Kafka</code><sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">2</a></sup> to meet the realtime criteria, and persistence storage to store metadata and some metrics. As for the persistence storage, it supports three types of databases: <code>HBase</code><sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">3</a></sup>, <code>Derby</code><sup id="fnref:DERBY"><a href="#fn:DERBY" class="footnote">4</a></sup>, and <code>Mysql</code>.</p>
+<p>Apache Eagle (called Eagle in the following), as an analytics solution for identifying security and performance issues instantly, relies on the streaming platform <code class="highlighter-rouge">Storm</code><sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">1</a></sup> + <code class="highlighter-rouge">Kafka</code><sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">2</a></sup> to meet the realtime criteria, and persistence storage to store metadata and some metrics. As for the persistence storage, it supports three types of databases: <code class="highlighter-rouge">HBase</code><sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">3</a></sup>, <code class="highlighter-rouge">Derby</code><sup id="fnref:DERBY"><a href="#fn:DERBY" class="footnote">4</a></sup>, and <code class="highlighter-rouge">Mysql</code>.</p>
 
 <p>To run monitoring applications, Eagle requires the following dependencies.</p>
 
@@ -226,22 +226,22 @@
 <div class="footnotes">
   <ol>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:DERBY">
-      <p><em>All mentions of “derby” on this page represent Apache Derby.</em> <a href="#fnref:DERBY" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “derby” on this page represent Apache Derby.</em>&nbsp;<a href="#fnref:DERBY" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HADOOP">
-      <p><em>Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/deployment-in-docker.html
----------------------------------------------------------------------
diff --git a/_site/docs/deployment-in-docker.html b/_site/docs/deployment-in-docker.html
index 8b8b009..f9c3cee 100644
--- a/_site/docs/deployment-in-docker.html
+++ b/_site/docs/deployment-in-docker.html
@@ -166,13 +166,14 @@
       <li>
        <p>Pull the latest eagle docker image from <a href="https://hub.docker.com/r/apacheeagle/sandbox/">docker hub</a> directly:</p>
 
-        <pre><code>docker pull apacheeagle/sandbox
+        <div class="highlighter-rouge"><pre class="highlight"><code>docker pull apacheeagle/sandbox
 </code></pre>
+        </div>
       </li>
       <li>
        <p>Then run the eagle docker image:</p>
 
-        <pre><code>docker run -p 9099:9099 -p 8080:8080 -p 8744:8744 -p 2181:2181 -p 2888:2888 \
+        <div class="highlighter-rouge"><pre class="highlight"><code>docker run -p 9099:9099 -p 8080:8080 -p 8744:8744 -p 2181:2181 -p 2888:2888 \
   -p 6667:6667 -p 60020:60020 -p 60030:60030 -p 60010:60010 -d --dns 127.0.0.1 \
   --entrypoint /usr/local/serf/bin/start-serf-agent.sh -e KEYCHAIN= \
   --env EAGLE_SERVER_HOST=sandbox.eagle.apache.org --name sandbox \
@@ -182,6 +183,7 @@ docker run -it --rm -e EXPECTED_HOST_COUNT=1 -e BLUEPRINT=hdp-singlenode-eagle \
   --link sandbox:ambariserver --entrypoint /bin/sh apacheeagle/sandbox:latest \
   -c /tmp/install-cluster.sh
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -192,14 +194,16 @@ docker run -it --rm -e EXPECTED_HOST_COUNT=1 -e BLUEPRINT=hdp-singlenode-eagle \
       <li>
        <p>Get the latest source code of eagle.</p>
 
-        <pre><code>git clone https://github.com/apache/eagle.git
+        <div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/apache/eagle.git
 </code></pre>
+        </div>
       </li>
       <li>
        <p>Then run the eagle docker command.</p>
 
-        <pre><code>cd eagle &amp;&amp; ./eagle-docker boot
+        <div class="highlighter-rouge"><pre class="highlight"><code>cd eagle &amp;&amp; ./eagle-docker boot
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/deployment-in-production.html
----------------------------------------------------------------------
diff --git a/_site/docs/deployment-in-production.html b/_site/docs/deployment-in-production.html
index 49f6c78..cc3af9a 100644
--- a/_site/docs/deployment-in-production.html
+++ b/_site/docs/deployment-in-production.html
@@ -184,9 +184,9 @@
 
     <ul>
       <li>
-        <p>Edit <code>bin/eagle-env.sh</code></p>
+        <p>Edit <code class="highlighter-rouge">bin/eagle-env.sh</code></p>
 
-        <pre><code>  # TODO: make sure java version is 1.7.x
+        <div class="highlighter-rouge"><pre class="highlight"><code>  # TODO: make sure java version is 1.7.x
   export JAVA_HOME=
 
   # TODO: Apache Storm nimbus host. Default is localhost
@@ -195,11 +195,12 @@
   # TODO: EAGLE_SERVICE_HOST, default is `hostname -f`
   export EAGLE_SERVICE_HOST=localhost
 </code></pre>
+        </div>
       </li>
       <li>
-        <p>Edit <code>conf/eagle-service.conf</code> to configure the database to use (for example: HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup>)</p>
+        <p>Edit <code class="highlighter-rouge">conf/eagle-service.conf</code> to configure the database to use (for example: HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup>)</p>
 
-        <pre><code>  # TODO: hbase.zookeeper.quorum in the format host1,host2,host3,...
+        <div class="highlighter-rouge"><pre class="highlight"><code>  # TODO: hbase.zookeeper.quorum in the format host1,host2,host3,...
   # default is "localhost"
   hbase-zookeeper-quorum="localhost"
 
@@ -211,13 +212,14 @@
   # default is "/hbase"
   zookeeper-znode-parent="/hbase"
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
   <li>
     <p>Step 2: Install metadata for policies</p>
 
-    <pre><code>  $ cd &lt;eagle-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ cd &lt;eagle-home&gt;
 
   # start Eagle web service
   $ bin/eagle-service.sh start
@@ -225,6 +227,7 @@
   # import metadata after Eagle service is successfully started
   $ bin/eagle-topology-init.sh
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -260,8 +263,9 @@
   <li>
     <p>Stop eagle service</p>
 
-    <pre><code>$ bin/eagle-service.sh stop
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ bin/eagle-service.sh stop
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -272,13 +276,13 @@
 <div class="footnotes">
   <ol>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/deployment-in-sandbox.html
----------------------------------------------------------------------
diff --git a/_site/docs/deployment-in-sandbox.html b/_site/docs/deployment-in-sandbox.html
index 239ed7d..c6868b0 100644
--- a/_site/docs/deployment-in-sandbox.html
+++ b/_site/docs/deployment-in-sandbox.html
@@ -201,21 +201,23 @@
         <p><strong>Option 1</strong>: Download eagle jar from <a href="http://66.211.190.194/eagle-0.1.0.tar.gz">here</a>.</p>
       </li>
       <li>
-        <p><strong>Option 2</strong>: Build from source code at <a href="https://github.com/apache/eagle">eagle github</a>. After a successful build, ‘eagle-xxx-bin.tar.gz’ will be generated under <code>./eagle-assembly/target</code></p>
+        <p><strong>Option 2</strong>: Build from source code at <a href="https://github.com/apache/eagle">eagle github</a>. After a successful build, ‘eagle-xxx-bin.tar.gz’ will be generated under <code class="highlighter-rouge">./eagle-assembly/target</code></p>
 
-        <pre><code># installed npm is required before compiling
+        <div class="highlighter-rouge"><pre class="highlight"><code># installed npm is required before compiling
 $ mvn clean install -DskipTests=true
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
   <li>
     <p><strong>Copy and extract the package to sandbox</strong></p>
 
-    <pre><code>#extract
+    <div class="highlighter-rouge"><pre class="highlight"><code>#extract
 $ tar -zxvf eagle-0.1.0-bin.tar.gz
 $ mv eagle-0.1.0 /usr/hdp/current/eagle
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -229,9 +231,10 @@ $ mv eagle-0.1.0 /usr/hdp/current/eagle
   <li>
     <p><strong>Option 1</strong>: Install Eagle using command line</p>
 
-    <pre><code>$ cd /usr/hdp/current/eagle
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd /usr/hdp/current/eagle
 $ examples/eagle-sandbox-starter.sh
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Option 2</strong>: Install Eagle using <a href="/docs/ambari-plugin-install.html">Eagle Ambari plugin</a></p>
@@ -246,7 +249,7 @@ $ examples/eagle-sandbox-starter.sh
   <li>
    <p><strong>Step 1</strong>: Configure Advanced hadoop-log4j via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add the “KAFKA_HDFS_AUDIT” log4j appender below to hdfs audit logging.</p>
 
-    <pre><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
 log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
 log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
 log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -256,22 +259,25 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 #log4j.appender.KAFKA_HDFS_AUDIT.BatchSize=1
 #log4j.appender.KAFKA_HDFS_AUDIT.QueueSize=1
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 4</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -279,14 +285,15 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
    <p><strong>Step 5</strong>: Save the changes and restart the namenode.</p>
   </li>
   <li>
-    <p><strong>Step 6</strong>: Check whether logs are flowing into topic <code>sandbox_hdfs_audit_log</code></p>
+    <p><strong>Step 6</strong>: Check whether logs are flowing into topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code></p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
 </ul>
 
-<p>Now please log in to Eagle web http://localhost:9099/eagle-service with account <code>admin/secret</code>, and try the sample demos on
+<p>Now please log in to Eagle web http://localhost:9099/eagle-service with account <code class="highlighter-rouge">admin/secret</code>, and try the sample demos on
 <a href="/docs/quick-start.html">Quick Start</a></p>
 
 <p>(If the NAT network is used in a virtual machine, it’s required to add port 9099 to forwarding ports)
@@ -300,19 +307,19 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/development-in-intellij.html
----------------------------------------------------------------------
diff --git a/_site/docs/development-in-intellij.html b/_site/docs/development-in-intellij.html
index e6c8536..2c67818 100644
--- a/_site/docs/development-in-intellij.html
+++ b/_site/docs/development-in-intellij.html
@@ -158,7 +158,7 @@
         <h1 class="page-header" style="margin-top: 0px">Development in Intellij</h1>
        <p>Apache Eagle (called Eagle in the following) can be developed in popular IDEs, e.g., Intellij and Eclipse. Here we focus on development in Intellij.</p>
 
-<h3 id="prepare-hadoophadoop-environment">1. Prepare Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup> environment</h3>
+<h3 id="1-prepare-hadoop-environment">1. Prepare Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup> environment</h3>
 
 <p>Normally an HDP sandbox is needed for testing Hadoop monitoring. Please refer to <a href="/docs/quick-start.html">Quick Start</a> for setting up the HDP sandbox.</p>
 
@@ -185,7 +185,7 @@
   </li>
 </ul>
 
-<h3 id="start-eagle-web-service-in-intellij">2. Start Eagle web service in Intellij</h3>
+<h3 id="2-start-eagle-web-service-in-intellij">2. Start Eagle web service in Intellij</h3>
 
 <p>Import the source code into Intellij, and find the eagle-webservice project. Intellij Ultimate supports launching a J2EE server within Intellij. If you don’t have
 the Intellij Ultimate version, Eclipse is another option.</p>
@@ -208,7 +208,7 @@ Intellij Ultimate version, Eclipse is another option.</p>
 
 <p>Configure Intellij for running an Apache Tomcat server with the eagle-service artifacts.</p>
 
-<h3 id="start-topology-in-intellij">3. Start topology in Intellij</h3>
+<h3 id="3-start-topology-in-intellij">3. Start topology in Intellij</h3>
 
 <ul>
   <li><strong>Check topology configuration</strong></li>
@@ -237,19 +237,19 @@ Intellij Ultimate version, Eclipse is another option.</p>
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:ZOOKEEPER">
-      <p><em>Apache ZooKeeper.</em> <a href="#fnref:ZOOKEEPER" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache ZooKeeper.</em>&nbsp;<a href="#fnref:ZOOKEEPER" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/development-in-macosx.html
----------------------------------------------------------------------
diff --git a/_site/docs/development-in-macosx.html b/_site/docs/development-in-macosx.html
index 671be47..21f6e1d 100644
--- a/_site/docs/development-in-macosx.html
+++ b/_site/docs/development-in-macosx.html
@@ -159,7 +159,7 @@
         <h2 id="how-to-setup-apache-eagle-development-environment-on-mac-osx">How to Setup Apache Eagle Development Environment on Mac OSX</h2>
 
 <p><em>Apache Eagle will be called Eagle in the following.</em><br />
-This tutorial is based on <code>Mac OS X</code>. It can be used as a reference guide for other OSes like Linux or Windows as well. To save you the time of jumping back and forth between different web pages, all necessary references will be pointed out.</p>
+This tutorial is based on <code class="highlighter-rouge">Mac OS X</code>. It can be used as a reference guide for other OSes like Linux or Windows as well. To save you the time of jumping back and forth between different web pages, all necessary references will be pointed out.</p>
 
 <h3 id="prerequisite">Prerequisite</h3>
 
@@ -169,8 +169,9 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>Make sure you have HomeBrew installed on your Mac. If not, please run:</p>
 
-<pre><code>$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+<div class="highlighter-rouge"><pre class="highlight"><code>$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 </code></pre>
+</div>
 
 <p>You can find more information about HomeBrew at http://brew.sh/.</p>
 
@@ -180,9 +181,10 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>Some core eagle modules are written in Scala. To install Scala and SBT, just run:</p>
 
-<pre><code> $ brew install scala
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install scala
  $ brew install sbt
 </code></pre>
+</div>
 
 <ul>
   <li><strong>NPM</strong></li>
@@ -190,8 +192,9 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>The Eagle-webservice module uses npm. To install it, run:</p>
 
-<pre><code> $ brew install npm
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install npm
 </code></pre>
+</div>
 
 <ul>
   <li><strong>Apache Maven</strong></li>
@@ -199,8 +202,9 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>Eagle is built with Maven:</p>
 
-<pre><code> $ brew install maven
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew install maven
 </code></pre>
+</div>
 
 <ul>
   <li>
@@ -210,8 +214,9 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
       <li>
         <p>Install HomeBrew Cask:</p>
 
-        <pre><code>$ brew install caskroom/cask/brew-cask
+        <div class="highlighter-rouge"><pre class="highlight"><code>$ brew install caskroom/cask/brew-cask
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Next, install JDK via HomeBrew:</p>
@@ -224,15 +229,18 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>You will see all available JDK versions, and you can install multiple JDK versions in this way. For eagle, please choose java7 to install:</p>
 
-<pre><code> $ brew cask install java7
+<div class="highlighter-rouge"><pre class="highlight"><code> $ brew cask install java7
 </code></pre>
+</div>
 
-<p><strong>Note:</strong>
-- During this writing SBT has an issue with JDK 8. This issue has been tested and confirmed using:
-- Java 1.8.0_66
-- Maven 3.3.9
-- Scala 2.11.7
-- Sbt 0.13.9</p>
+<p><strong>Note:</strong></p>
+<ul>
+  <li>During this writing SBT has an issue with JDK 8. This issue has been tested and confirmed using:</li>
+  <li>Java 1.8.0_66</li>
+  <li>Maven 3.3.9</li>
+  <li>Scala 2.11.7</li>
+  <li>Sbt 0.13.9</li>
+</ul>
 
 <p>You can find more information about HomeBrew Cask at <a href="http://caskroom.io">http://caskroom.io</a>.</p>
 
@@ -242,44 +250,53 @@ This tutorial is based <code>Mac OS X</code>. It can be used as a reference guid
 
 <p>You can use Jenv to manage multiple installed Java versions. To install it:</p>
 
-<pre><code>$ brew install https://raw.githubusercontent.com/entrypass/jenv/homebrew/homebrew/jenv.rb
+<div class="highlighter-rouge"><pre class="highlight"><code>$ brew install https://raw.githubusercontent.com/entrypass/jenv/homebrew/homebrew/jenv.rb
 </code></pre>
+</div>
 
 <p>and make sure it is activated automatically:</p>
 
-<pre><code>$ echo 'eval "$(jenv init -)"' &gt;&gt; ~/.bash_profile
+<div class="highlighter-rouge"><pre class="highlight"><code>$ echo 'eval "$(jenv init -)"' &gt;&gt; ~/.bash_profile
 </code></pre>
+</div>
 
-<p><strong>Note:</strong>
-- There is a known issue at this writing: https://github.com/gcuisinier/jenv/wiki/Trouble-Shooting
-- Please make sure JENV_ROOT has been set before jenv init:
-- $ export JENV_ROOT=/usr/local/opt/jenv</p>
+<p><strong>Note:</strong></p>
+<ul>
+  <li>There is a known issue at this writing: https://github.com/gcuisinier/jenv/wiki/Trouble-Shooting</li>
+  <li>Please make sure JENV_ROOT has been set before jenv init:</li>
+  <li>$ export JENV_ROOT=/usr/local/opt/jenv</li>
+</ul>
 
 <p>Now let Jenv manage JDK versions (remember: in OSX, all JVMs are located at /Library/Java/JavaVirtualMachines):</p>
 
-<pre><code>$ jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/
 $ jenv add /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/
 </code></pre>
+</div>
 
 <p>and</p>
 
-<pre><code>$ jenv rehash
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv rehash
 </code></pre>
+</div>
 
 <p>You can see all managed JDK versions:</p>
 
-<pre><code>$ jenv versions
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv versions
 </code></pre>
+</div>
 
 <p>Set the global Java version:</p>
 
-<pre><code>$ jenv global oracle64-1.8.0.66
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv global oracle64-1.8.0.66
 </code></pre>
+</div>
 
 <p>Switch to your eagle home directory and set the local JDK version for eagle:</p>
 
-<pre><code>$ jenv local oracle64-1.7.0.80
+<div class="highlighter-rouge"><pre class="highlight"><code>$ jenv local oracle64-1.7.0.80
 </code></pre>
+</div>
 
 <p>You can find more information about Jenv at https://github.com/rbenv/rbenv and http://hanxue-it.blogspot.com/2014/05/installing-java-8-managing-multiple.html.</p>
 
@@ -287,8 +304,9 @@ $ jenv add /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/
 
 <p>Go to the Eagle home directory and run:</p>
 
-<pre><code>mvn -DskipTests clean package
+<div class="highlighter-rouge"><pre class="highlight"><code>mvn -DskipTests clean package
 </code></pre>
+</div>
 
 <p>That’s all. Now you have a runnable eagle on your Mac. Have fun. :-)</p>
 

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/download-latest.html
----------------------------------------------------------------------
diff --git a/_site/docs/download-latest.html b/_site/docs/download-latest.html
index 32afb60..639b18f 100644
--- a/_site/docs/download-latest.html
+++ b/_site/docs/download-latest.html
@@ -157,58 +157,29 @@
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: -15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Apache Eagle Latest Download</h1>
         <blockquote>
-  <p>Version <code>0.4.0-incubating</code> is the latest release and <code>0.5.0-SNAPSHOT</code> is under active development on <a href="https://github.com/apache/eagle/tree/master">master</a> branch.</p>
+  <p>Version <strong>0.5.0</strong> is the latest release and 0.5.0-SNAPSHOT is under active development on <a href="https://github.com/apache/eagle/tree/master">master</a> branch.</p>
 
   <p>You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://dist.apache.org/repos/dist/release/eagle/KEYS">KEYS</a>.</p>
 </blockquote>
 
-<h1 id="snapshot">0.5.0-SNAPSHOT</h1>
-
-<blockquote>
-  <p>The first GA version <code>v0.5.0</code> with fantastic improvements and features is coming soon!</p>
-</blockquote>
-
-<ul>
-  <li>
-    <p>Build from source code:</p>
-
-    <pre><code>  git clone https://github.com/apache/eagle.git
-</code></pre>
-  </li>
-  <li>Release notes for preview:
-    <ul>
-      <li><a href="https://cwiki.apache.org/confluence/display/EAG/Eagle+Version+0.5.0">Eagle 0.5.0 Release Notes</a></li>
-    </ul>
-  </li>
-  <li>Documentation:
-    <ul>
-      <li><a href="/docs/latest/">Eagle 0.5.0 Documentations</a></li>
-    </ul>
-  </li>
-</ul>
-
-<h1 id="incubating">0.4.0-incubating</h1>
-
-<p><a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.eagle%22%20AND%20a%3A%22eagle-parent%22"><img src="https://maven-badges.herokuapp.com/maven-central/org.apache.eagle/eagle-parent/badge.svg" alt="Eagle Latest Maven Release" /></a></p>
-
+<h1 id="050">0.5.0</h1>
 <ul>
   <li>Release notes:
     <ul>
-      <li><a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.4.0-incubating">Eagle 0.4.0 Release Notes</a></li>
+      <li><a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.5.0">Eagle 0.5.0 Release Notes</a></li>
     </ul>
   </li>
   <li>Source download:
     <ul>
-      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.4.0-incubating">apache-eagle-0.4.0-incubating-src.tar.gz</a></li>
-      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5">apache-eagle-0.4.0-incubating-src.tar.gz.md5</a></li>
-      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.sha1">apache-eagle-0.4.0-incubating-src.tar.gz.sha1</a></li>
-      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.asc">apache-eagle-0.4.0-incubating-src.tar.gz.asc</a></li>
+      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.5.0">apache-eagle-0.5.0-src.tar.gz</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.md5">apache-eagle-0.5.0-src.tar.gz.md5</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.sha1">apache-eagle-0.5.0-src.tar.gz.sha1</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.asc">apache-eagle-0.5.0-src.tar.gz.asc</a></li>
     </ul>
   </li>
   <li>Git revision:
     <ul>
-      <li>tag: <a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.4.0-incubating">v0.4.0-incubating</a></li>
-      <li>commit: <a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=eac0f27958f2ed8c6842938dad0a995a87fd0715">eac0f27958f2ed8c6842938dad0a995a87fd0715</a></li>
+      <li>tag: <a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.5.0">v0.5.0</a></li>
     </ul>
   </li>
 </ul>
@@ -216,6 +187,7 @@
 <h1 id="previous-releases">Previous Releases</h1>
 
 <ul>
+  <li><a href="/docs/download.html#0.4.0-incubating">Eagle 0.4.0-incubating</a></li>
   <li><a href="/docs/download.html#0.3.0-incubating">Eagle 0.3.0-incubating</a></li>
 </ul>
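The `.md5`/`.sha1`/`.asc` files listed above support local verification of a download. Below is a hedged sketch of the SHA-1 check using a stand-in file; in real use the tarball and its `.sha1` come from the mirror links, an Apache `.sha1` file may contain only the bare digest (in which case compare `sha1sum` output by hand), and signature checking additionally needs `gpg --verify` with the KEYS file:

```shell
# Simulate verifying a release tarball against a sha1sum-format checksum file.
# File names mirror the 0.5.0 links above; the contents are stand-ins.
f=/tmp/apache-eagle-0.5.0-src.tar.gz
printf 'stand-in release bytes' > "$f"   # placeholder for the real tarball
sha1sum "$f" > "$f.sha1"                 # placeholder for the published checksum
sha1sum -c "$f.sha1" && echo "checksum OK"
```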
 

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/download.html
----------------------------------------------------------------------
diff --git a/_site/docs/download.html b/_site/docs/download.html
index bc50dad..fdfa701 100644
--- a/_site/docs/download.html
+++ b/_site/docs/download.html
@@ -160,7 +160,30 @@
   <p>You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://dist.apache.org/repos/dist/release/eagle/KEYS">KEYS</a>.</p>
 </blockquote>
 
-<h1 id="incubating">0.4.0-incubating</h1>
+<h1 id="050">0.5.0</h1>
+<ul>
+  <li>Release notes:
+    <ul>
+      <li><a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=blob_plain;f=CHANGELOG.txt;hb=refs/tags/v0.5.0">Eagle 0.5.0 Release Notes</a></li>
+    </ul>
+  </li>
+  <li>Source download:
+    <ul>
+      <li><a href="http://www.apache.org/dyn/closer.cgi?path=/eagle/apache-eagle-0.5.0">apache-eagle-0.5.0-src.tar.gz</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.md5">apache-eagle-0.5.0-src.tar.gz.md5</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.sha1">apache-eagle-0.5.0-src.tar.gz.sha1</a></li>
+      <li><a href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.5.0/apache-eagle-0.5.0-src.tar.gz.asc">apache-eagle-0.5.0-src.tar.gz.asc</a></li>
+    </ul>
+  </li>
+  <li>Git revision:
+    <ul>
+      <li>tag: <a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=refs/tags/v0.5.0">v0.5.0</a></li>
+      <li>commit: <a href="https://git-wip-us.apache.org/repos/asf?p=eagle.git;a=commit;h=c930a7ab4a4fb78a1cbacd8f8419de3b5dbf1bd7">c930a7ab4a4fb78a1cbacd8f8419de3b5dbf1bd7</a></li>
+    </ul>
+  </li>
+</ul>
+
+<h1 id="040-incubating">0.4.0-incubating</h1>
 <ul>
   <li>Release notes:
     <ul>
@@ -183,7 +206,7 @@
   </li>
 </ul>
 
-<h1 id="incubating-1">0.3.0-incubating</h1>
+<h1 id="030-incubating">0.3.0-incubating</h1>
 
 <ul>
   <li>Release notes:

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/hbase-auth-activity-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/hbase-auth-activity-monitoring.html b/_site/docs/hbase-auth-activity-monitoring.html
index af4c45c..bda9f52 100644
--- a/_site/docs/hbase-auth-activity-monitoring.html
+++ b/_site/docs/hbase-auth-activity-monitoring.html
@@ -160,11 +160,11 @@
 
 <p>Please follow the steps below to enable HBase authorization auditing in the HDP sandbox and Cloudera.</p>
 
-<h4 id="in-hbase-sitexml">1. in hbase-site.xml</h4>
+<h4 id="1-in-hbase-sitexml">1. in hbase-site.xml</h4>
 
 <p>Note: when testing in the HDP sandbox, Apache Ranger sometimes takes over access control for HBase, so you may need to change it back to the native HBase access controller, i.e. change com.xasecure.authorization.hbase.XaSecureAuthorizationCoprocessor to org.apache.hadoop.hbase.security.access.AccessController.</p>
 
-<pre><code>&lt;property&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>&lt;property&gt;
      &lt;name&gt;hbase.security.authorization&lt;/name&gt;
      &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
@@ -177,10 +177,11 @@
      &lt;value&gt;org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController&lt;/value&gt;
 &lt;/property&gt;
 </code></pre>
+</div>
 
-<h4 id="log4jproperties">2. log4j.properties</h4>
+<h4 id="2-log4jproperties">2. log4j.properties</h4>
 
-<pre><code>#
+<div class="highlighter-rouge"><pre class="highlight"><code>#
 # Security audit appender
 #
 hbase.security.log.file=SecurityAuth.audit
@@ -196,6 +197,7 @@ log4j.category.SecurityLogger=${hbase.security.logger}
 log4j.additivity.SecurityLogger=false
 log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE
 </code></pre>
+</div>
 
 <hr />
 
@@ -204,7 +206,7 @@ log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessContro
 <div class="footnotes">
   <ol>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
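With the TRACE logger enabled as above, AccessController decisions are written to the security audit log. A small sketch of filtering them, using made-up sample lines (real entries share the same logger name but differ in detail):

```shell
# Filter AccessController entries out of a security audit log.
# The two sample lines are illustrative, not captured from a real cluster.
log=/tmp/SecurityAuth.audit
cat > "$log" <<'EOF'
2017-11-22 10:00:00,000 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user hbase
2017-11-22 10:00:01,000 INFO SecurityLogger.other.Component: unrelated entry
EOF
grep -c 'access.AccessController' "$log"
```

For the sample file this prints 1, the number of AccessController decisions.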

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/hbase-data-activity-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/hbase-data-activity-monitoring.html b/_site/docs/hbase-data-activity-monitoring.html
index 119a7bd..63a4e77 100644
--- a/_site/docs/hbase-data-activity-monitoring.html
+++ b/_site/docs/hbase-data-activity-monitoring.html
@@ -174,15 +174,16 @@
 
 <ol>
   <li>
-    <p>edit Advanced hbase-log4j via Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">3</a></sup> UI, and append below sentence to <code>Security audit appender</code></p>
+    <p>edit Advanced hbase-log4j via Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">3</a></sup> UI, and append the line below to <code class="highlighter-rouge">Security audit appender</code></p>
 
-    <pre><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS
 </code></pre>
+    </div>
   </li>
   <li>
     <p>edit Advanced hbase-site.xml</p>
 
-    <pre><code> &lt;property&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code> &lt;property&gt;
    &lt;name&gt;hbase.security.authorization&lt;/name&gt;
    &lt;value&gt;true&lt;/value&gt;
  &lt;/property&gt;
@@ -197,6 +198,7 @@
    &lt;value&gt;org.apache.hadoop.hbase.security.access.AccessController&lt;/value&gt;
  &lt;/property&gt;
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Save and restart HBase</p>
@@ -208,21 +210,22 @@
 <h3 id="how-to-add-a-kafka-log4j-appender">How to add a Kafka log4j appender</h3>
 
 <blockquote>
-  <p>Notice: if you are willing to use sample logs under <code>eagle-security-hbase-security/test/resources/securityAuditLog</code>, please skip this part.</p>
+  <p>Notice: if you plan to use the sample logs under <code class="highlighter-rouge">eagle-security-hbase-security/test/resources/securityAuditLog</code>, please skip this part.</p>
 </blockquote>
 
 <ol>
   <li>
-    <p>create Kafka topic <code>sandbox_hbase_security_log</code></p>
+    <p>create Kafka topic <code class="highlighter-rouge">sandbox_hbase_security_log</code></p>
 
-    <pre><code> $ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hbase_security_log
+    <div class="highlighter-rouge"><pre class="highlight"><code> $ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hbase_security_log
 </code></pre>
+    </div>
   </li>
   <li>
-    <p>add below “KAFKA_HBASE_AUDIT” log4j appender to <code>Security audit appender</code>
+    <p>add the “KAFKA_HBASE_AUDIT” log4j appender below to <code class="highlighter-rouge">Security audit appender</code>.
 Please refer to http://goeagle.io/docs/import-hdfs-auditLog.html.</p>
 
-    <pre><code> log4j.appender.KAFKA_HBASE_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.appender.KAFKA_HBASE_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
  log4j.appender.KAFKA_HBASE_AUDIT.Topic=sandbox_hbase_security_log
  log4j.appender.KAFKA_HBASE_AUDIT.BrokerList=sandbox.hortonworks.com:6667
  log4j.appender.KAFKA_HBASE_AUDIT.Layout=org.apache.log4j.PatternLayout
@@ -231,18 +234,21 @@ Please refer to http://goeagle.io/docs/import-hdfs-auditLog.html.</p>
 log4j.appender.KAFKA_HBASE_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.GenericLogKeyer
 log4j.appender.KAFKA_HBASE_AUDIT.KeyPattern=user=(\\w+),\\s+
 </code></pre>
+    </div>
   </li>
   <li>
     <p>add the reference to KAFKA_HBASE_AUDIT to log4j appender</p>
 
-    <pre><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS,KAFKA_HBASE_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE,RFAS,KAFKA_HBASE_AUDIT
 </code></pre>
+    </div>
   </li>
   <li>
    <p>add Eagle log4j appender jars into HBASE_CLASSPATH by editing Advanced hbase-env via Ambari UI</p>
 
-    <pre><code> export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code> export HBASE_CLASSPATH=${HBASE_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Save and restart HBase</p>
@@ -253,32 +259,36 @@ Please refer to http://goeagle.io/docs/import-hdfs-auditLog.html.</p>
 
 <ol>
   <li>
-    <p>create tables (<code>skip if you do not use hbase</code>)</p>
+    <p>create tables (<code class="highlighter-rouge">skip if you do not use hbase</code>)</p>
 
-    <pre><code> bin/eagle-service-init.sh 
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-service-init.sh 
 </code></pre>
+    </div>
   </li>
   <li>
     <p>start Eagle service</p>
 
-    <pre><code> bin/eagle-service.sh start
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-service.sh start
 </code></pre>
+    </div>
   </li>
   <li>
     <p>import metadata</p>
 
-    <pre><code> bin/eagle-topology-init.sh
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-topology-init.sh
 </code></pre>
+    </div>
   </li>
   <li>
     <p>submit topology</p>
 
-    <pre><code> bin/eagle-topology.sh --main org.apache.eagle.security.hbase.HbaseAuditLogProcessorMain --config conf/sandbox-hbaseSecurityLog-application.conf start
+    <div class="highlighter-rouge"><pre class="highlight"><code> bin/eagle-topology.sh --main org.apache.eagle.security.hbase.HbaseAuditLogProcessorMain --config conf/sandbox-hbaseSecurityLog-application.conf start
 </code></pre>
+    </div>
   </li>
 </ol>
 
-<p>(sample sensitivity data at <code>examples/sample-sensitivity-resource-create.sh</code>)</p>
+<p>(sample sensitivity data at <code class="highlighter-rouge">examples/sample-sensitivity-resource-create.sh</code>)</p>
 
 <h3 id="q--a">Q &amp; A</h3>
 
@@ -293,13 +303,13 @@ Please refer to http://goeagle.io/docs/import-hdfs-auditLog.html.</p>
 <div class="footnotes">
   <ol>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
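The `KeyPattern=user=(\\w+)` setting above is what keys each Kafka message by the requesting user. That extraction can be mimicked in shell; the sample audit line below is an assumption about the log shape, not verbatim HBase output:

```shell
# Mimic the user=(\w+) key extraction on a sample HBase audit line.
# The line is illustrative; only the user=... token matters here.
line='2017-11-22 10:00:00,000 TRACE SecurityLogger.AccessController: Access allowed for (user=hbase, scope=default, family=f1)'
key=$(printf '%s\n' "$line" | sed -n 's/.*user=\([A-Za-z0-9_]*\).*/\1/p')
echo "$key"
```

This prints `hbase`, the value the appender would use as the message key.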

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/hdfs-auth-activity-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/hdfs-auth-activity-monitoring.html b/_site/docs/hdfs-auth-activity-monitoring.html
index bf7500f..d8b77c5 100644
--- a/_site/docs/hdfs-auth-activity-monitoring.html
+++ b/_site/docs/hdfs-auth-activity-monitoring.html
@@ -160,23 +160,25 @@
 
 <h4 id="sample-authorization-logs">Sample authorization logs</h4>
 
-<pre><code>2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
+<div class="highlighter-rouge"><pre class="highlight"><code>2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
 2016-06-08 02:55:35,304 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
 2016-06-08 02:55:36,862 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hive (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
 </code></pre>
+</div>
 
 <p>Steps for enabling service-level authorization activity logging:</p>
 
-<h4 id="enable-hdfs-authorization-security-in-core-sitexml">1. Enable HDFS Authorization Security in core-site.xml</h4>
+<h4 id="1-enable-hdfs-authorization-security-in-core-sitexml">1. Enable HDFS Authorization Security in core-site.xml</h4>
 
-<pre><code>  &lt;property&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>  &lt;property&gt;
       &lt;name&gt;hadoop.security.authorization&lt;/name&gt;
       &lt;value&gt;true&lt;/value&gt;
   &lt;/property&gt;
 </code></pre>
+</div>
 
-<h4 id="enable-hdfs-security-log-in-log4jproperties">2. Enable HDFS security log in log4j.properties</h4>
-<pre><code>#
+<h4 id="2-enable-hdfs-security-log-in-log4jproperties">2. Enable HDFS security log in log4j.properties</h4>
+<div class="highlighter-rouge"><pre class="highlight"><code>#
 #Security audit appender
 #
 hadoop.security.logger=INFO,DRFAS
@@ -190,6 +192,7 @@ log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
 log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
 log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
 </code></pre>
+</div>
 
       </div><!--end of loadcontent-->  
     </div>
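Given the sample ServiceAuthorizationManager lines above, the authorized principal can be pulled out with a one-line sed; this is just a parsing sketch over the first sample line:

```shell
# Extract the authorized principal from a ServiceAuthorizationManager line
# (the line is the first sample shown on this page).
line='2016-06-08 02:55:07,742 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hdfs (auth:SIMPLE) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol'
user=$(printf '%s\n' "$line" | sed -n 's/.*Authorization successful for \([^ ]*\) .*/\1/p')
echo "$user"
```

This prints `hdfs` for the sample line.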

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/hdfs-data-activity-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/hdfs-data-activity-monitoring.html b/_site/docs/hdfs-data-activity-monitoring.html
index b0a694e..eeefe05 100644
--- a/_site/docs/hdfs-data-activity-monitoring.html
+++ b/_site/docs/hdfs-data-activity-monitoring.html
@@ -181,7 +181,7 @@
   <li>
    <p><strong>Step 1</strong>: Configure Advanced hdfs-log4j via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a><sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup>, by adding the “KAFKA_HDFS_AUDIT” log4j appender below to hdfs audit logging.</p>
 
-    <pre><code> log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code> log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
  log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
  log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
  log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -189,22 +189,25 @@
  log4j.appender.KAFKA_HDFS_AUDIT.Layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
  log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 2</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit Advanced hadoop-env via <a href="http://localhost:8080/#/main/services/HDFS/configs" target="_blank">Ambari UI</a>, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/usr/hdp/current/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -223,10 +226,11 @@
 
 <ul>
   <li>
-    <p><strong>Step 7</strong>: Check whether logs from “/var/log/hadoop/hdfs/hdfs-audit.log” are flowing into topic <code>sandbox_hdfs_audit_log</code></p>
+    <p><strong>Step 7</strong>: Check whether logs from “/var/log/hadoop/hdfs/hdfs-audit.log” are flowing into topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code></p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic sandbox_hdfs_audit_log      
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic sandbox_hdfs_audit_log      
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -260,13 +264,13 @@
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
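The `KeyClass=...AuditLogKeyer` setting above keys each Kafka message by the audit record's user. A sketch of the equivalent extraction, on a sample line following the usual `ugi=...` HDFS audit layout (the concrete values are made up):

```shell
# Mimic keying an hdfs-audit.log record by user, as AuditLogKeyer does.
# The sample line follows the standard HDFS audit layout; values are made up.
line='2017-11-22 10:00:00,000 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/tmp dst=null perm=null'
user=$(printf '%s\n' "$line" | sed -n 's/.*ugi=\([^ ]*\).*/\1/p')
echo "$user"
```

This prints `hdfs`, the key under which the record would be published.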

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/hive-query-activity-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/hive-query-activity-monitoring.html b/_site/docs/hive-query-activity-monitoring.html
index d128562..cc19300 100644
--- a/_site/docs/hive-query-activity-monitoring.html
+++ b/_site/docs/hive-query-activity-monitoring.html
@@ -187,12 +187,13 @@
   </li>
 </ul>
 
-<pre><code>$ su hive
+<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive
 $ hive
 $ set hive.execution.engine=mr;
 $ use xademo;
 $ select a.phone_number from customer_details a, call_detail_records b where a.phone_number=b.phone_number;
 </code></pre>
+</div>
 
 <p>From UI click on alert tab and you should see alert for your attempt to read restricted column.</p>
 
@@ -203,7 +204,7 @@ $ select a.phone_number from customer_details a, call_detail_records b where a.p
 <div class="footnotes">
   <ol>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/import-hdfs-auditLog.html
----------------------------------------------------------------------
diff --git a/_site/docs/import-hdfs-auditLog.html b/_site/docs/import-hdfs-auditLog.html
index 5a18853..f23ddfd 100644
--- a/_site/docs/import-hdfs-auditLog.html
+++ b/_site/docs/import-hdfs-auditLog.html
@@ -169,9 +169,10 @@ install a <strong>namenode log4j Kafka appender</strong>.</p>
 
    <p>Here is a sample Kafka command to create topic ‘sandbox_hdfs_audit_log’</p>
 
-    <pre><code>cd &lt;kafka-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>cd &lt;kafka-home&gt;
 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 2</strong>: Install Logstash-kafka plugin</p>
@@ -189,7 +190,7 @@ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 -
   <li>
     <p><strong>Step 3</strong>: Create a Logstash configuration file under ${LOGSTASH_HOME}/conf. Here is a sample.</p>
 
-    <pre><code>  input {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  input {
       file {
           type =&gt; "hdp-nn-audit"
           path =&gt; "/path/to/audit.log"
@@ -230,15 +231,17 @@ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 -
       }
   }
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 4</strong>: Start Logstash</p>
 
-    <pre><code>bin/logstash -f conf/sample.conf
+    <div class="highlighter-rouge"><pre class="highlight"><code>bin/logstash -f conf/sample.conf
 </code></pre>
+    </div>
   </li>
   <li>
-    <p><strong>Step 5</strong>: Check whether logs are flowing into the kafka topic specified by <code>topic_id</code></p>
+    <p><strong>Step 5</strong>: Check whether logs are flowing into the kafka topic specified by <code class="highlighter-rouge">topic_id</code></p>
   </li>
 </ul>
 
@@ -252,14 +255,15 @@ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 -
   <li>
    <p><strong>Step 1</strong>: Create a Kafka topic. Here is an example Kafka command for creating topic “sandbox_hdfs_audit_log”</p>
 
-    <pre><code>cd &lt;kafka-home&gt;
+    <div class="highlighter-rouge"><pre class="highlight"><code>cd &lt;kafka-home&gt;
 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Step 2</strong>: Configure $HADOOP_CONF_DIR/log4j.properties, and add a log4j appender “KAFKA_HDFS_AUDIT” to hdfs audit logging</p>
 
-    <pre><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
+    <div class="highlighter-rouge"><pre class="highlight"><code>log4j.appender.KAFKA_HDFS_AUDIT=org.apache.eagle.log4j.kafka.KafkaLog4jAppender
 log4j.appender.KAFKA_HDFS_AUDIT.Topic=sandbox_hdfs_audit_log
 log4j.appender.KAFKA_HDFS_AUDIT.BrokerList=sandbox.hortonworks.com:6667
 log4j.appender.KAFKA_HDFS_AUDIT.KeyClass=org.apache.eagle.log4j.kafka.hadoop.AuditLogKeyer
@@ -269,22 +273,25 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 #log4j.appender.KAFKA_HDFS_AUDIT.BatchSize=1
 #log4j.appender.KAFKA_HDFS_AUDIT.QueueSize=1
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-log4j-conf.png" alt="HDFS LOG4J Configuration" title="hdfslog4jconf" /></p>
   </li>
   <li>
     <p><strong>Step 3</strong>: Edit $HADOOP_CONF_DIR/hadoop-env.sh, and add the reference to KAFKA_HDFS_AUDIT to HADOOP_NAMENODE_OPTS.</p>
 
-    <pre><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
+    <div class="highlighter-rouge"><pre class="highlight"><code>-Dhdfs.audit.logger=INFO,DRFAAUDIT,KAFKA_HDFS_AUDIT
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf.png" alt="HDFS Environment Configuration" title="hdfsenvconf" /></p>
   </li>
   <li>
     <p><strong>Step 4</strong>: Edit $HADOOP_CONF_DIR/hadoop-env.sh, and append the following command to it.</p>
 
-    <pre><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/path/to/eagle/lib/log4jkafka/lib/*
+    <div class="highlighter-rouge"><pre class="highlight"><code>export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/path/to/eagle/lib/log4jkafka/lib/*
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-env-conf2.png" alt="HDFS Environment Configuration" title="hdfsenvconf2" /></p>
   </li>
@@ -294,8 +301,9 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
   <li>
     <p><strong>Step 6</strong>: Check whether logs are flowing into Topic sandbox_hdfs_audit_log</p>
 
-    <pre><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -306,10 +314,10 @@ log4j.appender.KAFKA_HDFS_AUDIT.ProducerType=async
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>all mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/index.html
----------------------------------------------------------------------
diff --git a/_site/docs/index.html b/_site/docs/index.html
index 7e607b4..c3ff024 100644
--- a/_site/docs/index.html
+++ b/_site/docs/index.html
@@ -204,13 +204,13 @@
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:SPARK">
-      <p><em>All mentions of “spark” on this page represent Apache Spark.</em> <a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “spark” on this page represent Apache Spark.</em>&nbsp;<a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache HIVE.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>


[2/3] eagle git commit: Update site

Posted by ha...@apache.org.
http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/installation.html
----------------------------------------------------------------------
diff --git a/_site/docs/installation.html b/_site/docs/installation.html
index adf5c4c..6d3b8b7 100644
--- a/_site/docs/installation.html
+++ b/_site/docs/installation.html
@@ -180,16 +180,19 @@
 <h4 id="install-eagle">Install Eagle</h4>
 
 <ul>
-  <li>
-    <p><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a>
-&gt;       Build project mvn clean install -DskipTests=true</p>
+  <li><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a>
+    <blockquote>
+      <div class="highlighter-rouge"><pre class="highlight"><code>  Build project mvn clean install -DskipTests=true
+</code></pre>
+      </div>
+    </blockquote>
   </li>
   <li>
    <p><strong>Step 2</strong>: Download the eagle-0.1.0-bin.tar.gz package from a successful build into your HDP sandbox.</p>
 
     <ul>
       <li>
-        <p>Option 1: <code>scp -P 2222  eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p>
+        <p>Option 1: <code class="highlighter-rouge">scp -P 2222  eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p>
       </li>
       <li>
         <p>Option 2: Create shared directory between host and Sandbox, and restart Sandbox. Then you can find the shared directory under /media in Sandbox.</p>
@@ -199,27 +202,30 @@
   <li>
     <p><strong>Step 3</strong>: Extract eagle tarball package</p>
 
-    <pre><code>$ cd /usr/hdp/current
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ cd /usr/hdp/current
 $ tar -zxvf eagle-0.1.0-bin.tar.gz
 $ mv eagle-0.1.0 eagle
 </code></pre>
+    </div>
   </li>
   <li>
    <p><strong>Step 4</strong>: Add root as an HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup> superuser via <a href="http://127.0.0.1:8080/#/main/services/HBASE/configs">Ambari</a> (optional; as an alternative, a user can operate HBase via sudo su hbase).</p>
   </li>
-  <li>
-    <p><strong>Step 5</strong>: Install Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service 
-&gt;
-  /usr/hdp/current/eagle/bin/eagle-ambari.sh install.</p>
+  <li><strong>Step 5</strong>: Install Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service
+    <blockquote>
+
+      <p>/usr/hdp/current/eagle/bin/eagle-ambari.sh install.</p>
+    </blockquote>
   </li>
   <li>
    <p><strong>Step 6</strong>: Restart <a href="http://127.0.0.1:8000/">Ambari</a>: click to disable and then enable Ambari again.</p>
   </li>
-  <li>
-    <p><strong>Step 7</strong>: Start HBase &amp; Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> &amp; Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup>
-From Ambari UI, restart any suggested components(“Restart button on top”) &amp; Start Storm (Start “Nimbus” ,”Supervisor” &amp; “Storm UI Server”), Kafka (Start “Kafka Broker”) , HBase (Start “RegionServer”  and “ HBase Master”) 
-&gt;
-<img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p>
+  <li><strong>Step 7</strong>: Start HBase &amp; Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> &amp; Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup>
+From Ambari UI, restart any suggested components (“Restart” button on top) &amp; start Storm (start “Nimbus”, “Supervisor” &amp; “Storm UI Server”), Kafka (start “Kafka Broker”), HBase (start “RegionServer” and “HBase Master”)
+    <blockquote>
+
+      <p><img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p>
+    </blockquote>
   </li>
   <li>
     <p><strong>Step 8</strong>: Add Eagle Service To Ambari. (Click For Video)</p>
@@ -241,9 +247,10 @@ EagleServiceSuccess</p>
   <li>
    <p><strong>Step 9</strong>: Add the policies and metadata required by running the scripts below.</p>
 
-    <pre><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh 
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh 
 $ /usr/hdp/current/eagle/examples/sample-policy-create.sh
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -254,16 +261,16 @@ $ /usr/hdp/current/eagle/examples/sample-policy-create.sh
 <div class="footnotes">
   <ol>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:AMBARI">
-      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em>&nbsp;<a href="#fnref:AMBARI" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/jmx-metric-monitoring.html
----------------------------------------------------------------------
diff --git a/_site/docs/jmx-metric-monitoring.html b/_site/docs/jmx-metric-monitoring.html
index bd6b2d8..9574f54 100644
--- a/_site/docs/jmx-metric-monitoring.html
+++ b/_site/docs/jmx-metric-monitoring.html
@@ -177,9 +177,10 @@
 <h3 id="setup"><strong>Setup</strong></h3>
 <p>From the Hortonworks sandbox, just run the setup script below to install the Python JMX script, create the Kafka topic, update Apache HBase tables and deploy the “hadoopjmx” Storm topology.</p>
 
-<pre><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh
+<div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh
 $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh  
 </code></pre>
+</div>
 
 <p><br /></p>
 
@@ -204,15 +205,17 @@ $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh
   <li>
    <p>First make sure that the Kafka topic “nn_jmx_metric_sandbox” is populated with JMX metric data periodically (i.e. that the Python script is running).</p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox
 </code></pre>
+    </div>
   </li>
   <li>
    <p>Generate an alert by producing an alert-triggering message into the Kafka topic.</p>
 
-    <pre><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox
+    <div class="highlighter-rouge"><pre class="highlight"><code>  $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox
   $ {"host": "localhost", "timestamp": 1457033916718, "metric": "hadoop.namenode.fsnamesystemstate.fsstate", "component": "namenode", "site": "sandbox", "value": 1.0}
 </code></pre>
+    </div>
   </li>
 </ul>
 
@@ -223,10 +226,10 @@ $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
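The jmx-metric-monitoring page in the diff above feeds JSON metric messages into the nn_jmx_metric_sandbox Kafka topic. As a reference for that message shape, here is a sketch (the helper name `make_jmx_metric` is ours, not part of Eagle) that assembles the same alert-triggering sample shown in the page:

```python
import json
import time

def make_jmx_metric(host, metric, component, site, value, timestamp_ms=None):
    """Build a JMX metric message in the JSON shape the Eagle docs above
    feed into the nn_jmx_metric_sandbox Kafka topic."""
    return {
        "host": host,
        # Eagle expects epoch milliseconds; default to "now" if not given.
        "timestamp": timestamp_ms if timestamp_ms is not None else int(time.time() * 1000),
        "metric": metric,
        "component": component,
        "site": site,
        "value": value,
    }

# Reproduce the sample alert-triggering message from the page above.
msg = make_jmx_metric(
    host="localhost",
    metric="hadoop.namenode.fsnamesystemstate.fsstate",
    component="namenode",
    site="sandbox",
    value=1.0,
    timestamp_ms=1457033916718,
)
print(json.dumps(msg, sort_keys=True))
```

The resulting JSON line can be pasted into `kafka-console-producer.sh` exactly as shown in the page above.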

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/mapr-integration.html
----------------------------------------------------------------------
diff --git a/_site/docs/mapr-integration.html b/_site/docs/mapr-integration.html
index bb0a9dd..116d119 100644
--- a/_site/docs/mapr-integration.html
+++ b/_site/docs/mapr-integration.html
@@ -171,84 +171,97 @@
 <p>Here are the steps to follow:</p>
 
 <h4 id="step1-enable-audit-logs-for-filesystem-operations-and-table-operations-in-mapr">Step1: Enable audit logs for FileSystem Operations and Table Operations in MapR</h4>
-<p>First we need to enable data auditing at all three levels: cluster level, volume level and directory,file or table level. 
-##### Cluster level:</p>
+<p>First we need to enable data auditing at all three levels: cluster level, volume level and directory, file or table level.</p>
+<h5 id="cluster-level">Cluster level:</h5>
 
-<pre><code>       $ maprcli audit data -cluster &lt;cluster name&gt; -enabled true 
+<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli audit data -cluster &lt;cluster name&gt; -enabled true 
                            [ -maxsize &lt;GB, defaut value is 32. When size of audit logs exceed this number, an alarm will be sent to the dashboard in the MapR Control Service &gt; ]
                            [ -retention &lt;number of Days&gt; ]
 </code></pre>
+</div>
 <p>Example:</p>
 
-<pre><code>        $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30
 </code></pre>
+</div>
 
 <h5 id="volume-level">Volume level:</h5>
 
-<pre><code>       $ maprcli volume audit -cluster &lt;cluster name&gt; -enabled true 
+<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli volume audit -cluster &lt;cluster name&gt; -enabled true 
                             -name &lt;volume name&gt;
                             [ -coalesce &lt;interval in minutes, the interval of time during which READ, WRITE, or GETATTR operations on one file from one client IP address are logged only once, if auditing is enabled&gt; ]
 </code></pre>
+</div>
 
 <p>Example:</p>
 
-<pre><code>        $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true
 </code></pre>
+</div>
 
 <p>To verify that auditing is enabled for a particular volume, use this command:</p>
 
-<pre><code>        $ maprcli volume info -name &lt;volume name&gt; -json | grep -i 'audited\|coalesce'
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli volume info -name &lt;volume name&gt; -json | grep -i 'audited\|coalesce'
 </code></pre>
+</div>
 <p>and you should see something like this:</p>
 
-<pre><code>                        "audited":1,
+<div class="highlighter-rouge"><pre class="highlight"><code>                        "audited":1,
                         "coalesceInterval":60
 </code></pre>
+</div>
 <p>If “audited” is ‘1’ then auditing is enabled for this volume.</p>
 
 <h5 id="directory-file-or-mapr-db-table-level">Directory, file, or MapR-DB table level:</h5>
 
-<pre><code>        $ hadoop mfs -setaudit on &lt;directory|file|table&gt;
+<div class="highlighter-rouge"><pre class="highlight"><code>        $ hadoop mfs -setaudit on &lt;directory|file|table&gt;
 </code></pre>
+</div>
 
-<p>To check whether Auditing is Enabled for a Directory, File, or MapR-DB Table, use <code>$ hadoop mfs -ls</code>
+<p>To check whether auditing is enabled for a directory, file, or MapR-DB table, use <code class="highlighter-rouge">$ hadoop mfs -ls</code>
 Example:
-Before enable the audit log on file <code>/tmp/dir</code>, try <code>$ hadoop mfs -ls /tmp/dir</code>, you should see something like this:</p>
+Before enabling auditing on the file <code class="highlighter-rouge">/tmp/dir</code>, try <code class="highlighter-rouge">$ hadoop mfs -ls /tmp/dir</code>; you should see something like this:</p>
 
-<pre><code>drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 </code></pre>
+</div>
 
-<p>The second <code>U</code> means auditing on this file is not enabled. 
+<p>The second <code class="highlighter-rouge">U</code> means auditing on this file is not enabled. 
 Enable auditing with this command:</p>
 
-<pre><code>$ hadoop mfs -setaudit on /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -setaudit on /tmp/dir
 </code></pre>
+</div>
 
 <p>Then check the auditing bit with :</p>
 
-<pre><code>$ hadoop mfs -ls /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -ls /tmp/dir
 </code></pre>
+</div>
 
 <p>you should see something like this:</p>
 
-<pre><code>drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
+<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 </code></pre>
+</div>
 
-<p>We can see the previous <code>U</code> has been changed to <code>A</code> which indicates auditing on this file is enabled.</p>
+<p>We can see the previous <code class="highlighter-rouge">U</code> has been changed to <code class="highlighter-rouge">A</code> which indicates auditing on this file is enabled.</p>
 
-<p><code>Important</code>:
+<p><code class="highlighter-rouge">Important</code>:
 When auditing is enabled on a directory, the directories/files already in it won’t inherit auditing, but a file/dir newly created in it (after auditing is enabled) will.</p>
 
 <h4 id="step2-stream-log-data-into-kafka-by-using-logstash">Step2: Stream log data into Kafka by using Logstash</h4>
-<p>As MapR do not have name node, instead it use CLDB service, we have to use logstash to stream log data into Kafka.
-- First find out the nodes that have CLDB service
-- Then find out the location of audit log files, eg: <code>/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>, file names should be in this format: <code>FSAudit.log-2016-05-04-001.json</code> 
-- Created a logstash conf file and run it, following this doc<a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></p>
+<p>As MapR does not have a NameNode and uses the CLDB service instead, we have to use Logstash to stream log data into Kafka.</p>
+<ul>
+  <li>First find out the nodes that run the CLDB service</li>
+  <li>Then find out the location of the audit log files, e.g. <code class="highlighter-rouge">/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>; file names should be in this format: <code class="highlighter-rouge">FSAudit.log-2016-05-04-001.json</code></li>
+  <li>Create a Logstash conf file and run it, following this doc: <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></li>
+</ul>
 
 <h4 id="step3-set-up-maprfsauditlog-applicaiton-in-eagle-service">Step3: Set up the maprFSAuditLog application in Eagle Service</h4>
-<p>After Eagle Service gets started, create mapFSAuditLog application using:  <code>$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in site “sandbox”, you may need to change it to your own site.
+<p>After Eagle Service gets started, create the maprFSAuditLog application using: <code class="highlighter-rouge">$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in the site “sandbox”; you may need to change it to your own site.
 After these steps you are good to go.</p>
 
 <p>Have fun!!! :)</p>
@@ -266,7 +279,7 @@ After these steps you are good to go.</p>
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
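Step 2 of the mapr-integration page above relies on a Logstash pipeline to ship FSAudit logs into Kafka. The fragment below is a rough sketch only: the path, topic name, and broker address are placeholders, and the exact option names (e.g. `topic_id` vs. `topic`, `bootstrap_servers` vs. `broker_list`) vary across Logstash versions — follow the linked Logstash-kafka doc for your setup.

```conf
# Hypothetical pipeline: tail MapR FSAudit JSON logs and publish them to Kafka.
input {
  file {
    # Placeholder path; use the audit directory found on your CLDB node.
    path => "/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/FSAudit.log-*.json"
    start_position => "beginning"
  }
}
output {
  kafka {
    # Placeholder topic and broker; match what your Eagle application consumes.
    topic_id => "mapr_fs_audit_log"
    bootstrap_servers => "localhost:9092"
  }
}
```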

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/quick-start-0.3.0.html
----------------------------------------------------------------------
diff --git a/_site/docs/quick-start-0.3.0.html b/_site/docs/quick-start-0.3.0.html
index 4d048fc..1991131 100644
--- a/_site/docs/quick-start-0.3.0.html
+++ b/_site/docs/quick-start-0.3.0.html
@@ -178,21 +178,22 @@
   <li>
     <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p>
 
-    <pre><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz
 $ cd incubator-eagle-release-0.3.0-rc3  
 $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/180.patch
 $ git apply 180.patch
 $ mvn clean package -DskipTests
 </code></pre>
+    </div>
 
-    <p>After building successfully, you will get tarball under <code>eagle-assembly/target/</code> named as <code>eagle-0.3.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named as <code class="highlighter-rouge">eagle-0.3.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<pre><code> $ scp -P 2222  eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/
+<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222  eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/
  $ ssh root@127.0.0.1 -p 2222 (password is hadoop)
  $ tar -zxvf eagle-0.3.0-incubating-bin.tar.gz
  $ mv eagle-0.3.0-incubating eagle
@@ -200,6 +201,7 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
+</div>
 
 <p><br /></p>
 
@@ -217,7 +219,7 @@ $ mvn clean package -DskipTests
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/quick-start.html
----------------------------------------------------------------------
diff --git a/_site/docs/quick-start.html b/_site/docs/quick-start.html
index 6575409..18ae834 100644
--- a/_site/docs/quick-start.html
+++ b/_site/docs/quick-start.html
@@ -179,21 +179,22 @@
   <li>
     <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p>
 
-    <pre><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz
 $ cd apache-eagle-0.4.0-incubating-src 
 $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/268.patch
 $ git apply 268.patch
 $ mvn clean package -DskipTests
 </code></pre>
+    </div>
 
-    <p>After building successfully, you will get a tarball under <code>eagle-assembly/target/</code> named <code>apache-eagle-0.4.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get a tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named <code class="highlighter-rouge">apache-eagle-0.4.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<pre><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/
+<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/
  $ ssh root@127.0.0.1 -p 2222 (password is hadoop)
  $ tar -zxvf apache-eagle-0.4.0-incubating-bin.tar.gz
  $ mv apache-eagle-0.4.0-incubating eagle
@@ -201,20 +202,22 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
+</div>
 
 <p><br /></p>
 
 <h3 id="sample-application-hive-query-activity-monitoring-in-sandbox"><strong>Sample Application: Hive query activity monitoring in sandbox</strong></h3>
-<p>After executing <code>examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on the Apache Storm (check with <a href="http://sandbox.hortonworks.com:8744/index.html">storm ui</a>), and a sample policy of Hive activity monitoring defined.</p>
+<p>After executing <code class="highlighter-rouge">examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on Apache Storm (check with the <a href="http://sandbox.hortonworks.com:8744/index.html">Storm UI</a>), and a sample Hive activity monitoring policy defined.</p>
 
 <p>Next you can trigger an alert by running a Hive query.</p>
 
-<pre><code>$ su hive
+<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive
 $ hive
 $ set hive.execution.engine=mr;
 $ use xademo;
 $ select a.phone_number from customer_details a, call_detail_records b where a.phone_number=b.phone_number;
 </code></pre>
+</div>
 <p><br /></p>
 
 <hr />
@@ -224,10 +227,10 @@ $ select a.phone_number from customer_details a, call_detail_records b where a.p
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/security.html
----------------------------------------------------------------------
diff --git a/_site/docs/security.html b/_site/docs/security.html
index 0fc1599..34794a1 100644
--- a/_site/docs/security.html
+++ b/_site/docs/security.html
@@ -158,7 +158,7 @@
         <h1 class="page-header" style="margin-top: 0px">Apache Eagle Security</h1>
         <p>The Apache Software Foundation takes a very active stance in eliminating security problems in its software products. Apache Eagle is also responsive to such issues around its features.</p>
 
-<p>If you have any concern regarding to Eagle’s Security or you believe a vulnerability is discovered, don’t hesitate to get connected with Aapche Security Team by sending emails to <a href="&#109;&#097;&#105;&#108;&#116;&#111;:&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a>. In the message, you can indicate the project name is Eagle, provide a description of the issue, and you are recommended to give the way of reproducing it. The security team and eagle community will get back to you after assessing the findings.</p>
+<p>If you have any concern regarding Eagle’s security, or you believe a vulnerability has been discovered, don’t hesitate to contact the Apache Security Team by sending email to <a href="mailto:security@apache.org">security@apache.org</a>. In the message, indicate that the project name is Eagle, provide a description of the issue, and preferably the way to reproduce it. The security team and the Eagle community will get back to you after assessing the findings.</p>
 
 <blockquote>
   <p><strong>PLEASE PAY ATTENTION</strong> to report any security problem to the security email address before disclosing it publicly.</p>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/serviceconfiguration.html
----------------------------------------------------------------------
diff --git a/_site/docs/serviceconfiguration.html b/_site/docs/serviceconfiguration.html
index 2519ae8..3ecd741 100644
--- a/_site/docs/serviceconfiguration.html
+++ b/_site/docs/serviceconfiguration.html
@@ -171,7 +171,7 @@ description of Eagle Service configuration.</p>
   <li>for hbase</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service{
 		storage-type="hbase"
 		hbase-zookeeper-quorum="sandbox.hortonworks.com"
@@ -182,12 +182,13 @@ description of Eagle Service configuration.</p>
 	}
       }
 </code></pre>
+</div>
 
 <ul>
   <li>for mysql</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service {
 		storage-type="jdbc"
 		storage-adapter="mysql"
@@ -201,12 +202,13 @@ description of Eagle Service configuration.</p>
 	}
 }
 </code></pre>
+</div>
 
 <ul>
   <li>for derby</li>
 </ul>
 
-<pre><code>eagle {
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
 	service {
 		storage-type="jdbc"
 		storage-adapter="derby"
@@ -220,6 +222,7 @@ description of Eagle Service configuration.</p>
 	}
 }
 </code></pre>
+</div>
 <p><br /></p>
 
       </div><!--end of loadcontent-->  

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/terminology.html
----------------------------------------------------------------------
diff --git a/_site/docs/terminology.html b/_site/docs/terminology.html
index 1ee3156..27c99ca 100644
--- a/_site/docs/terminology.html
+++ b/_site/docs/terminology.html
@@ -193,10 +193,10 @@ They are basic knowledge of Eagle which also will help to well understand Eagle.
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/classification.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/classification.html b/_site/docs/tutorial/classification.html
index e79dc14..64e6c49 100644
--- a/_site/docs/tutorial/classification.html
+++ b/_site/docs/tutorial/classification.html
@@ -182,30 +182,33 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive<
 
        <p>You may configure the default path for Apache Hadoop clients to connect to the remote HDFS NameNode.</p>
 
-        <pre><code>  classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
 </code></pre>
+        </div>
       </li>
       <li>
         <p>HA case</p>
 
        <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how it’s configured (the backing namenodes) and how to fail over between them in HA mode.</p>
 
-        <pre><code>  classification.fs.defaultFS=hdfs://nameservice1
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.fs.defaultFS=hdfs://nameservice1
   classification.dfs.nameservices=nameservice1
   classification.dfs.ha.namenodes.nameservice1=namenode1,namenode2
   classification.dfs.namenode.rpc-address.nameservice1.namenode1=hadoopnamenode01:8020
   classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode02:8020
   classification.dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>For a Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure “eagle.keytab.file” and “eagle.kerberos.principal” to authenticate its access.</p>
 
-        <pre><code>  classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   classification.eagle.kerberos.principal=eagle@SOMEWHERE.COM
 </code></pre>
+        </div>
 
        <p>If there is an exception about “invalid server principal name”, you may need to check the DNS resolver, or the data transfer settings, such as “dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, “dfs.trustedchannel.resolver.class”, “dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -216,12 +219,13 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive<
       <li>
         <p>Basic</p>
 
-        <pre><code>  classification.accessType=metastoredb_jdbc
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.accessType=metastoredb_jdbc
   classification.password=hive
   classification.user=hive
   classification.jdbcDriverClassName=com.mysql.jdbc.Driver
   classification.jdbcUrl=jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -234,16 +238,17 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive<
 
        <p>You need to set the “hbase.zookeeper.quorum” property (e.g. “localhost”) and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>Depending on your environment, you can add or remove some of the following properties. Here is the reference.</p>
 
-        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
+        <div class="highlighter-rouge"><pre class="highlight"><code>  classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
   classification.hbase.security.authentication=kerberos
   classification.hbase.master.kerberos.principal=hadoop/_HOST@EXAMPLE.COM
@@ -251,6 +256,7 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive<
   classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   classification.eagle.kerberos.principal=eagle@EXAMPLE.COM
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -321,10 +327,10 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive<
 <div class="footnotes">
   <ol>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hbase” on this page represent Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
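The page patched above configures site connections through `classification.*` properties. As an illustration only (the helper name and prefix-stripping behavior are assumptions, not Eagle API), a minimal sketch of turning such property lines into the plain Hadoop/HBase keys the underlying clients expect:

```python
def to_plain_conf(lines, prefix="classification."):
    """Parse 'key=value' property lines and drop the site-level prefix.

    Blank lines and '#' comments are skipped; only the first '=' splits
    key from value, so values may themselves contain '='.
    """
    conf = {}
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if not sep:
            continue  # not a property line
        if key.startswith(prefix):
            key = key[len(prefix):]  # classification.foo -> foo
        conf[key.strip()] = value.strip()
    return conf
```

For example, `classification.eagle.kerberos.principal=eagle@SOMEWHERE.COM` would yield the plain key `eagle.kerberos.principal`.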

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/ldap.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/ldap.html b/_site/docs/tutorial/ldap.html
index 7cc814e..918fe8c 100644
--- a/_site/docs/tutorial/ldap.html
+++ b/_site/docs/tutorial/ldap.html
@@ -160,7 +160,7 @@
 
 <p>Step 1: edit configuration under conf/ldap.properties.</p>
 
-<pre><code>ldap.server=ldap://localhost:10389
+<div class="highlighter-rouge"><pre class="highlight"><code>ldap.server=ldap://localhost:10389
 ldap.username=uid=admin,ou=system
 ldap.password=secret
 ldap.user.searchBase=ou=Users,o=mojo
@@ -169,12 +169,13 @@ ldap.user.groupSearchBase=ou=groups,o=mojo
 acl.adminRole=
 acl.defaultRole=ROLE_USER
 </code></pre>
+</div>
 
 <p>acl.adminRole and acl.defaultRole are two customized properties for Eagle. Eagle manages admin users with groups. If you set acl.adminRole as ROLE_{EAGLE-ADMIN-GROUP-NAME}, members of this group have the admin privilege. acl.defaultRole is ROLE_USER.</p>
 
 <p>Step 2: edit conf/eagle-service.conf, and add springActiveProfile=”default”</p>
 
-<pre><code>eagle{
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle{
     service{
         storage-type="hbase"
         hbase-zookeeper-quorum="localhost"
@@ -184,6 +185,7 @@ acl.defaultRole=ROLE_USER
     }
 }
 </code></pre>
+</div>
 
 
       </div><!--end of loadcontent-->  
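The ldap.html page above describes mapping an LDAP group to the admin role via the ROLE_ prefix. A hedged sketch of that lookup (the `resolve_role` helper is hypothetical, illustrating only the mapping the page describes):

```python
def resolve_role(user_groups, admin_role, default_role="ROLE_USER"):
    """Return the admin role when the user belongs to the admin group.

    acl.adminRole is expected as ROLE_{GROUP-NAME}; stripping the
    'ROLE_' prefix recovers the LDAP group to test membership against.
    An empty acl.adminRole means nobody gets the admin role.
    """
    if admin_role and admin_role.startswith("ROLE_"):
        admin_group = admin_role[len("ROLE_"):]
        if admin_group in user_groups:
            return admin_role
    return default_role
```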

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/notificationplugin.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/notificationplugin.html b/_site/docs/tutorial/notificationplugin.html
index 627b7b1..db26f69 100644
--- a/_site/docs/tutorial/notificationplugin.html
+++ b/_site/docs/tutorial/notificationplugin.html
@@ -183,12 +183,12 @@
   </li>
 </ul>
 
-<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" />
-### Customized Notification Plugin</p>
+<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /></p>
+<h3 id="customized-notification-plugin">Customized Notification Plugin</h3>
 
 <p>To integrate a customized notification plugin, we must implement an interface</p>
 
-<pre><code>public interface NotificationPlugin {
+<div class="highlighter-rouge"><pre class="highlight"><code>public interface NotificationPlugin {
 /**
  * for initialization
  * @throws Exception
@@ -218,24 +218,26 @@ void onAlert(AlertAPIEntity alertEntity) throws  Exception;
 List&lt;NotificationStatus&gt; getStatusList();
 } Examples: AlertKafkaPlugin, AlertEmailPlugin, and AlertEagleStorePlugin.
 </code></pre>
+</div>
 
 <p>The second and crucial step is to register the configurations of the customized plugin. In other words, we need to persist the configuration template into the database in order to expose the configurations to users in the front end.</p>
 
 <p>Examples:</p>
 
-<pre><code>{
-   "prefix": "alertNotifications",
-   "tags": {
-     "notificationType": "kafka"
-   },
-   "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin",
-   "description": "send alert to kafka bus",
-   "enabled":true,
-   "fields": "[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"
-}
-</code></pre>
+<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w">
+   </span><span class="nt">"prefix"</span><span class="p">:</span><span class="w"> </span><span class="s2">"alertNotifications"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"tags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+     </span><span class="nt">"notificationType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"kafka"</span><span class="w">
+   </span><span class="p">},</span><span class="w">
+   </span><span class="nt">"className"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.eagle.notification.plugin.AlertKafkaPlugin"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"send alert to kafka bus"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"enabled"</span><span class="p">:</span><span class="kc">true</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"fields"</span><span class="p">:</span><span class="w"> </span><span class="s2">"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre>
+</div>
 
-<p><strong>Note</strong>: <code>fields</code> is the configuration for notification type <code>kafka</code></p>
+<p><strong>Note</strong>: <code class="highlighter-rouge">fields</code> is the configuration for notification type <code class="highlighter-rouge">kafka</code></p>
 
 <p>How can we do that? <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh">Here</a> are Eagle’s other notification plugin configurations. Just append yours, and run this script when the Eagle service is up.</p>
 
@@ -246,7 +248,7 @@ List&lt;NotificationStatus&gt; getStatusList();
 <div class="footnotes">
   <ol>
     <li id="fn:KAFKA">
-      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em>&nbsp;<a href="#fnref:KAFKA" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
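In the registration example above, the `fields` value is itself JSON stored as an escaped string. A sketch of building such an entry programmatically (the `plugin_registration` helper is hypothetical; only the field names mirror the example):

```python
import json

def plugin_registration(notification_type, class_name, description, fields):
    """Build a notification-plugin registration entry.

    'fields' is a list of {"name": ..., "value": ...} dicts; it is
    serialized to a JSON string because the service stores it as text,
    which is why it appears escaped in the example above.
    """
    return {
        "prefix": "alertNotifications",
        "tags": {"notificationType": notification_type},
        "className": class_name,
        "description": description,
        "enabled": True,
        "fields": json.dumps(fields),
    }
```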

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/policy.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/policy.html b/_site/docs/tutorial/policy.html
index e732eb0..657cf2d 100644
--- a/_site/docs/tutorial/policy.html
+++ b/_site/docs/tutorial/policy.html
@@ -183,12 +183,13 @@
   <li>
    <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. Eagle also supports window functions to extend policies with time functions.</p>
 
-    <pre><code>command = delete 
+    <div class="highlighter-rouge"><pre class="highlight"><code>command = delete 
 (Eagle currently supports the following commands: open, delete, copy, append, copy from local, get, move, mkdir, create, list, change permissions)
 	
 source = /tmp/private 
 (Eagle supports wildcarding for property values for example /tmp/*)
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-policy2.png" alt="HDFS Policies" /></p>
   </li>
@@ -215,12 +216,13 @@ source = /tmp/private
   <li>
    <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. Eagle also supports window functions to extend policies with time functions.</p>
 
-    <pre><code>command = Select 
+    <div class="highlighter-rouge"><pre class="highlight"><code>command = Select 
 (Eagle currently supports the following DDL statements: Create, Drop, Alter, Truncate, Show)
 	
 sensitivity type = PHONE_NUMBER
 (Eagle supports classifying data in Hive with different sensitivity types. Users can use these sensitivity types to create policies)
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hive-policy2.png" alt="Hive Policies" /></p>
   </li>
@@ -238,7 +240,7 @@ sensitivity type = PHONE_NUMBER
 <div class="footnotes">
   <ol>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
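The policy page above notes that Eagle supports wildcarding for property values such as `/tmp/*`. Eagle's exact matching semantics are not spelled out here; as an illustration only, shell-style globbing via Python's `fnmatch` behaves similarly:

```python
from fnmatch import fnmatch

def source_matches(pattern, path):
    """Shell-style wildcard match for a policy 'source' value.

    Note: fnmatch's '*' also crosses '/' boundaries, so '/tmp/*'
    matches nested paths too; this is an assumption, not necessarily
    Eagle's behavior.
    """
    return fnmatch(path, pattern)
```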

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/site-0.3.0.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/site-0.3.0.html b/_site/docs/tutorial/site-0.3.0.html
index a542128..45784d9 100644
--- a/_site/docs/tutorial/site-0.3.0.html
+++ b/_site/docs/tutorial/site-0.3.0.html
@@ -180,32 +180,35 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p>
 
         <p>You may configure the default path for Hadoop clients to connect to the remote hdfs namenode.</p>
 
-        <pre><code>  {"fs.defaultFS":"hdfs://sandbox.hortonworks.com:8020"}
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://sandbox.hortonworks.com:8020"</span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>HA case</p>
 
         <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how it’s configured (the backing namenodes) and how to fail over between them under HA mode.</p>
 
-        <pre><code>  {"fs.defaultFS":"hdfs://nameservice1",
-   "dfs.nameservices": "nameservice1",
-   "dfs.ha.namenodes.nameservice1":"namenode1,namenode2",
-   "dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020",
-   "dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020",
-   "dfs.client.failover.proxy.provider.nameservice1": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://nameservice1"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.nameservices"</span><span class="p">:</span><span class="w"> </span><span class="s2">"nameservice1"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.ha.namenodes.nameservice1"</span><span class="p">:</span><span class="s2">"namenode1,namenode2"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode01:8020"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode2"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode02:8020"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.client.failover.proxy.provider.nameservice1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>For a Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure “eagle.keytab.file” and “eagle.kerberos.principal” to authenticate Eagle’s access.</p>
 
-        <pre><code>  { "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
-    "eagle.kerberos.principal":"eagle@SOMEWHERE.COM"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="w"> </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"eagle@SOMEWHERE.COM"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
 
         <p>If there is an exception about “invalid server principal name”, you may need to check the DNS resolver, or the data transfer settings, such as “dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, “dfs.trustedchannel.resolver.class”, “dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -216,14 +219,15 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p>
       <li>
         <p>Basic</p>
 
-        <pre><code>  {
-    "accessType": "metastoredb_jdbc",
-    "password": "hive",
-    "user": "hive",
-    "jdbcDriverClassName": "com.mysql.jdbc.Driver",
-    "jdbcUrl": "jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="w">
+    </span><span class="nt">"accessType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"metastoredb_jdbc"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"password"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"user"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"jdbcDriverClassName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"com.mysql.jdbc.Driver"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"jdbcUrl"</span><span class="p">:</span><span class="w"> </span><span class="s2">"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -236,27 +240,29 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p>
 
         <p>You need to set the “hbase.zookeeper.quorum”:”localhost” property and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <pre><code>  {
-      "hbase.zookeeper.property.clientPort":"2181",
-      "hbase.zookeeper.quorum":"localhost"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="w">
+      </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>Depending on your environment, you can add or remove some of the following properties. Here is a reference.</p>
 
-        <pre><code>  {
-      "hbase.zookeeper.property.clientPort":"2181",
-      "hbase.zookeeper.quorum":"localhost",
-      "hbase.security.authentication":"kerberos",
-      "hbase.master.kerberos.principal":"hadoop/_HOST@EXAMPLE.COM",
-      "zookeeper.znode.parent":"/hbase",
-      "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
-      "eagle.kerberos.principal":"eagle@EXAMPLE.COM"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span class="w">  </span><span class="p">{</span><span class="w">
+      </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.security.authentication"</span><span class="p">:</span><span class="s2">"kerberos"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.master.kerberos.principal"</span><span class="p">:</span><span class="s2">"hadoop/_HOST@EXAMPLE.COM"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"zookeeper.znode.parent"</span><span class="p">:</span><span class="s2">"/hbase"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w">
+      </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"eagle@EXAMPLE.COM"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -274,13 +280,13 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p>
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HBASE">
-      <p><em>Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache HBase.</em>&nbsp;<a href="#fnref:HBASE" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
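The HA example in the site-0.3.0 page above lists one `dfs.namenode.rpc-address.<nameservice>.<namenode>` entry per namenode. A small sanity-check sketch for such a config (hypothetical helper; it only encodes the key-naming convention shown in the example):

```python
def missing_rpc_addresses(conf):
    """List HA namenodes that lack a dfs.namenode.rpc-address.* entry.

    Reads dfs.nameservices and dfs.ha.namenodes.<ns> from the config
    dict, then checks each listed namenode for its rpc-address key.
    """
    ns = conf["dfs.nameservices"]
    nodes = conf["dfs.ha.namenodes." + ns].split(",")
    return [n.strip() for n in nodes
            if "dfs.namenode.rpc-address.%s.%s" % (ns, n.strip()) not in conf]
```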

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/topologymanagement.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/topologymanagement.html b/_site/docs/tutorial/topologymanagement.html
index 2a95034..d635def 100644
--- a/_site/docs/tutorial/topologymanagement.html
+++ b/_site/docs/tutorial/topologymanagement.html
@@ -168,7 +168,7 @@
 <p>The application manager consists of a daemon scheduler and an execution module. The scheduler periodically loads user operations (start/stop) from the database, and the execution module executes these operations. For more details, please refer to <a href="https://cwiki.apache.org/confluence/display/EAG/Application+Management">here</a>.</p>
 
 <h3 id="configurations">Configurations</h3>
-<p>The configuration file <code>eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of default topology configuration.</p>
+<p>The configuration file <code class="highlighter-rouge">eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of default topology configuration.</p>
 
 <ul>
   <li>
@@ -262,7 +262,7 @@
   <li>
     <p>Editing eagle-scheduler.conf, and start Eagle service</p>
 
-    <pre><code> # enable application manager       
+    <div class="highlighter-rouge"><pre class="highlight"><code> # enable application manager       
  appCommandLoaderEnabled = true
     
  # provide jar path
@@ -272,9 +272,10 @@
  envContextConfig.url = "http://sandbox.hortonworks.com:8744"
  envContextConfig.nimbusHost = "sandbox.hortonworks.com"
 </code></pre>
+    </div>
 
     <p>For more configurations, please refer back to <a href="/docs/configuration.html">Application Configuration</a>. <br />
- After the configuration is ready, start Eagle service <code>bin/eagle-service.sh start</code>.</p>
+ After the configuration is ready, start Eagle service <code class="highlighter-rouge">bin/eagle-service.sh start</code>.</p>
   </li>
   <li>
     <p>Go to admin page 
@@ -296,11 +297,11 @@
   <li>
     <p>Go to site page, and add topology configurations.</p>
 
-    <p><strong>NOTICE</strong> topology configurations defined here are REQUIRED an extra prefix <code>.app</code></p>
+    <p><strong>NOTICE</strong>: topology configurations defined here require an extra prefix <code class="highlighter-rouge">app.</code></p>
 
     <p>Below are some example configurations for [site=sandbox, application=hbaseSecurityLog].</p>
 
-    <pre><code> classification.hbase.zookeeper.property.clientPort=2181
+    <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181
  classification.hbase.zookeeper.quorum=sandbox.hortonworks.com
     
  app.envContextConfig.env=storm
@@ -329,6 +330,7 @@
  app.eagleProps.eagleService.username=admin
  app.eagleProps.eagleService.password=secret
 </code></pre>
+    </div>
 
     <p><img src="/images/appManager/topology-configuration-1.png" alt="topology-configuration-1" />
 <img src="/images/appManager/topology-configuration-2.png" alt="topology-configuration-2" /></p>
@@ -351,7 +353,7 @@
 <div class="footnotes">
   <ol>
     <li id="fn:STORM">
-      <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “storm” on this page represent Apache Storm.</em>&nbsp;<a href="#fnref:STORM" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>
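The topologymanagement page above requires the `app.` prefix for topology configurations alongside plain `classification.*` entries. As an illustration only (the split and prefix-stripping are assumptions inferred from the NOTICE, not Eagle's actual loader), a sketch of separating the two groups:

```python
def split_topology_config(site_props):
    """Separate classification.* entries from app.* topology overrides.

    Keys under 'app.' lose the prefix before being handed to the
    topology; everything else is kept as site-level configuration.
    """
    topology, site = {}, {}
    for key, value in site_props.items():
        if key.startswith("app."):
            topology[key[len("app."):]] = value  # app.foo.bar -> foo.bar
        else:
            site[key] = value
    return site, topology
```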

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/userprofile.html
----------------------------------------------------------------------
diff --git a/_site/docs/tutorial/userprofile.html b/_site/docs/tutorial/userprofile.html
index fbf55e6..5b442ab 100644
--- a/_site/docs/tutorial/userprofile.html
+++ b/_site/docs/tutorial/userprofile.html
@@ -173,9 +173,10 @@ is started.</p>
       <li>
         <p>Option 1: command line</p>
 
-        <pre><code>$ cd &lt;eagle-home&gt;/bin
+        <div class="highlighter-rouge"><pre class="highlight"><code>$ cd &lt;eagle-home&gt;/bin
 $ bin/eagle-userprofile-scheduler.sh --site sandbox start
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Option 2: start via Apache Ambari
@@ -203,8 +204,9 @@ $ bin/eagle-userprofile-scheduler.sh --site sandbox start
 
     <p>Submit the userProfiles topology if it’s not on the <a href="http://sandbox.hortonworks.com:8744">topology UI</a></p>
 
-    <pre><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Option 2</strong>: Apache Ambari</p>
@@ -219,23 +221,26 @@ $ bin/eagle-userprofile-scheduler.sh --site sandbox start
   <li>Prepare sample data for ML training and validation
     <ul>
       <li>a. Download the following sample data to be used for training</li>
-      <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code>user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
-      <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code>user1.hdfs-audit.2015-10-11-01.txt</code></a></li>
-      <li>b. Downlaod <a href="/data/userprofile-validate.txt"><code>userprofile-validate.txt</code></a>file which contains data points that you can try to test the models</li>
+    </ul>
+    <ul>
+      <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
+      <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-01.txt</code></a>
+    * b. Download <a href="/data/userprofile-validate.txt"><code class="highlighter-rouge">userprofile-validate.txt</code></a> file, which contains data points that you can use to test the models</li>
     </ul>
   </li>
   <li>Copy the files (downloaded in the previous step) into a location in sandbox 
-For example: <code>/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
-  <li>Modify <code>&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf </code>
-update <code>training-audit-path</code> to set to the path for training data sample (the path you used for Step 1.a)
+For example: <code class="highlighter-rouge">/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
+  <li>Modify <code class="highlighter-rouge">&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf </code>
+update <code class="highlighter-rouge">training-audit-path</code> to the path of the training data sample (the path you used for Step 1.a)
 update detection-audit-path to the path of the validation data (the path you used for Step 1.b)</li>
   <li>Run ML training program from eagle UI</li>
   <li>
     <p>Produce Apache Kafka data using the contents from the validate file (Step 1.b)
-Run the command (assuming the eagle configuration uses Kafka topic <code>sandbox_hdfs_audit_log</code>)</p>
+Run the command (assuming the eagle configuration uses Kafka topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code>)</p>
 
-    <pre><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
   <li>Paste a few lines of data from the validate file into kafka-console-producer 
 Check <a href="http://localhost:9099/eagle-service/#/dam/alertList">http://localhost:9099/eagle-service/#/dam/alertList</a> for generated alerts</li>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/usecases.html
----------------------------------------------------------------------
diff --git a/_site/docs/usecases.html b/_site/docs/usecases.html
index 056c0ce..ebb0a23 100644
--- a/_site/docs/usecases.html
+++ b/_site/docs/usecases.html
@@ -218,16 +218,16 @@
 <div class="footnotes">
   <ol>
     <li id="fn:HADOOP">
-      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em>&nbsp;<a href="#fnref:HADOOP" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:HIVE">
-      <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “hive” on this page represent Apache Hive.</em>&nbsp;<a href="#fnref:HIVE" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:SPARK">
-      <p><em>All mentions of “spark” on this page represent Apache Spark.</em> <a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
+      <p><em>All mentions of “spark” on this page represent Apache Spark.</em>&nbsp;<a href="#fnref:SPARK" class="reversefootnote">&#8617;</a></p>
     </li>
     <li id="fn:CASSANDRA">
-      <p><em>Apache Cassandra.</em> <a href="#fnref:CASSANDRA" class="reversefootnote">&#8617;</a></p>
+      <p><em>Apache Cassandra.</em>&nbsp;<a href="#fnref:CASSANDRA" class="reversefootnote">&#8617;</a></p>
     </li>
   </ol>
 </div>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/feed.xml
----------------------------------------------------------------------
diff --git a/_site/feed.xml b/_site/feed.xml
index bd59283..a2720f7 100644
--- a/_site/feed.xml
+++ b/_site/feed.xml
@@ -5,9 +5,9 @@
     <description>Eagle - Analyze Big Data Platforms for Security and Performance</description>
     <link>http://goeagle.io/</link>
     <atom:link href="http://goeagle.io/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Mon, 03 Apr 2017 19:55:40 +0800</pubDate>
-    <lastBuildDate>Mon, 03 Apr 2017 19:55:40 +0800</lastBuildDate>
-    <generator>Jekyll v2.5.3</generator>
+    <pubDate>Wed, 22 Nov 2017 13:52:35 +0800</pubDate>
+    <lastBuildDate>Wed, 22 Nov 2017 13:52:35 +0800</lastBuildDate>
+    <generator>Jekyll v3.4.3</generator>
     
       <item>
         <title>Apache Eagle 正式发布:分布式实时Hadoop数据安全方案</title>
@@ -17,7 +17,7 @@
 
 &lt;p&gt;日前,eBay公司隆重宣布正式向开源业界推出分布式实时安全监控方案 - Apache Eagle (http://goeagle.io),该项目已于2015年10月26日正式加入Apache 成为孵化器项目。Apache Eagle提供一套高效分布式的流式策略引擎,具有高实时、可伸缩、易扩展、交互友好等特点,同时集成机器学习对用户行为建立Profile以实现智能实时地保护Hadoop生态系统中大数据的安全。&lt;/p&gt;
 
-&lt;h2 id=&quot;section&quot;&gt;背景&lt;/h2&gt;
+&lt;h2 id=&quot;背景&quot;&gt;背景&lt;/h2&gt;
 &lt;p&gt;随着大数据的发展,越来越多的成功企业或者组织开始采取数据驱动商业的运作模式。在eBay,我们拥有数万名工程师、分析师和数据科学家,他们每天访问分析数PB级的数据,以为我们的用户带来无与伦比的体验。在全球业务中,我们也广泛地利用海量大数据来连接我们数以亿计的用户。&lt;/p&gt;
 
 &lt;p&gt;近年来,Hadoop已经逐渐成为大数据分析领域最受欢迎的解决方案,eBay也一直在使用Hadoop技术从数据中挖掘价值,例如,我们通过大数据提高用户的搜索体验,识别和优化精准广告投放,充实我们的产品目录,以及通过点击流分析以理解用户如何使用我们的在线市场平台等。&lt;/p&gt;
@@ -54,20 +54,20 @@
   &lt;li&gt;&lt;strong&gt;开源&lt;/strong&gt;:Eagle一直根据开源的标准开发,并构建于诸多大数据领域的开源产品之上,因此我们决定以Apache许可证开源Eagle,以回馈社区,同时也期待获得社区的反馈、协作与支持。&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;eagle&quot;&gt;Eagle概览&lt;/h2&gt;
+&lt;h2 id=&quot;eagle概览&quot;&gt;Eagle概览&lt;/h2&gt;
 
 &lt;p&gt;&lt;img src=&quot;/images/posts/eagle-group.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
 
-&lt;h4 id=&quot;data-collection-and-storage&quot;&gt;数据流接入和存储(Data Collection and Storage)&lt;/h4&gt;
+&lt;h4 id=&quot;数据流接入和存储data-collection-and-storage&quot;&gt;数据流接入和存储(Data Collection and Storage)&lt;/h4&gt;
 &lt;p&gt;Eagle提供高度可扩展的编程API,可以支持将任何类型的数据源集成到Eagle的策略执行引擎中。例如,在Eagle HDFS 审计事件(Audit)监控模块中,通过Kafka来实时接收来自Namenode Log4j Appender 或者 Logstash Agent 收集的数据;在Eagle Hive 监控模块中,通过YARN API 收集正在运行Job的Hive 查询日志,并保证比较高的可伸缩性和容错性。&lt;/p&gt;
 
-&lt;h4 id=&quot;data-processing&quot;&gt;数据实时处理(Data Processing)&lt;/h4&gt;
+&lt;h4 id=&quot;数据实时处理data-processing&quot;&gt;数据实时处理(Data Processing)&lt;/h4&gt;
 
 &lt;p&gt;&lt;strong&gt;流处理API(Stream Processing API)Eagle&lt;/strong&gt; 提供独立于物理平台而高度抽象的流处理API,目前默认支持Apache Storm,但是也允许扩展到其他任意流处理引擎,比如Flink 或者 Samza等。该层抽象允许开发者在定义监控数据处理逻辑时,无需在物理执行层绑定任何特定流处理平台,而只需通过复用、拼接和组装例如数据转换、过滤、外部数据Join等组件,以实现满足需求的DAG(有向无环图),同时,开发者也可以很容易地以编程地方式将业务逻辑流程和Eagle 策略引擎框架集成起来。Eagle框架内部会将描述业务逻辑的DAG编译成底层流处理架构的原生应用,例如Apache Storm Topology 等,从事实现平台的独立。&lt;/p&gt;
 
 &lt;p&gt;&lt;strong&gt;以下是一个Eagle如何处理事件和告警的示例:&lt;/strong&gt;&lt;/p&gt;
 
-&lt;pre&gt;&lt;code&gt;StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
+&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -75,6 +75,7 @@ StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout
        .alertWithConsumer(&quot;userActivity&quot;, &quot;userProfileExecutor&quot;) // ML policy evaluation
 env.execute(); // execute stream processing and alert
 &lt;/code&gt;&lt;/pre&gt;
+&lt;/div&gt;
 
 &lt;p&gt;&lt;strong&gt;Alerting Framework&lt;/strong&gt; The Eagle alerting framework consists of a stream metadata API, a policy engine provider API, a policy partitioner API, and an alert deduplication framework:&lt;/p&gt;
 
@@ -84,7 +85,7 @@ env.execute(); // execute stream processing and alert
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Extensibility&lt;/strong&gt; Eagle's policy engine provider API allows you to plug in new policy engines&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  public interface PolicyEvaluatorServiceProvider {
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one type of policy
     public Class&amp;lt;? extends PolicyEvaluator&amp;gt; getPolicyEvaluator(); // get policy evaluator implementation
     public List&amp;lt;Module&amp;gt; getBindingModules();  // policy text with json format to object mapping
@@ -95,15 +96,17 @@ env.execute(); // execute stream processing and alert
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
   &lt;li&gt;&lt;strong&gt;Policy Partitioner API&lt;/strong&gt; allows policies to run in parallel on different physical nodes and lets you customize the policy partitioner class. These capabilities enable policies and events to be processed in a fully distributed fashion.&lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; Eagle supports a policy partitioning interface so that large numbers of policies can run concurrently and scale out&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  public interface PolicyPartitioner extends Serializable {
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface PolicyPartitioner extends Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies
   }
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
 
     &lt;p&gt;&lt;img src=&quot;/images/posts/policy-partition.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
 
@@ -160,26 +163,29 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   &lt;li&gt;
     &lt;p&gt;Single-event policy (a user accesses a sensitive column in Hive)&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  from hiveAccessLogStream[sensitivityType==&#39;PHONE_NUMBER&#39;] select * insert into outputStream;
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream;
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes)&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == &#39;/tmp/private&#39;)]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert into outputStream;
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert into outputStream;
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
 &lt;/ul&gt;
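
The PolicyPartitioner interface shown earlier can be satisfied with a simple stable hash over the policy's type and id. The following is a minimal, hypothetical sketch: the class HashPolicyPartitioner is illustrative and not part of Eagle's codebase, and the interface is redeclared only to keep the example self-contained.

```java
import java.io.Serializable;

// Redeclared for a self-contained example; in Eagle this interface is
// provided by the alerting framework.
interface PolicyPartitioner extends Serializable {
    int partition(int numTotalPartitions, String policyType, String policyId);
}

// Hypothetical partitioner: assigns each policy to an evaluator partition
// by a stable hash of (policyType, policyId), so the same policy always
// lands on the same node.
public class HashPolicyPartitioner implements PolicyPartitioner {
    @Override
    public int partition(int numTotalPartitions, String policyType, String policyId) {
        int h = (policyType + ":" + policyId).hashCode();
        // Mask the sign bit so the modulus is never negative.
        return (h & Integer.MAX_VALUE) % numTotalPartitions;
    }

    public static void main(String[] args) {
        PolicyPartitioner p = new HashPolicyPartitioner();
        int part = p.partition(4, "siddhiCEPEngine", "policy-1");
        System.out.println(part >= 0 && part < 4); // prints true
    }
}
```

Because the hash is deterministic, repeated calls for the same policy return the same partition, which is what lets policies and their matching events be co-located on one node.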
 
 &lt;p&gt;&lt;strong&gt;Query Service&lt;/strong&gt; Eagle provides a SQL-like REST API for comprehensive computation, querying, and analysis over massive data sets, supporting filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, pagination, and more. Eagle uses HBase as its default data store but also supports JDBC-based relational databases. With HBase in particular, Eagle natively gains the ability to store and query massive volumes of monitoring data: the Eagle query framework compiles a user's SQL-like query into native HBase Filter objects and can further improve response times through HBase coprocessors.&lt;/p&gt;
 
-&lt;pre&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
+&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
 &lt;/code&gt;&lt;/pre&gt;
+&lt;/div&gt;
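
For illustration, a client could submit the query string above over HTTP after URL-encoding the query expression. The sketch below shows only the client-side encoding step; the host, port, and REST path in the usage example are hypothetical placeholders, not documented Eagle endpoints.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EagleQueryClientSketch {
    // Compose the request URL for the SQL-like query service; 'base' is a
    // caller-supplied placeholder for the deployment's REST endpoint.
    static String buildQueryUrl(String base, String query, int pageSize)
            throws UnsupportedEncodingException {
        return base + "?query=" + URLEncoder.encode(query, "UTF-8")
                + "&pageSize=" + pageSize;
    }

    public static void main(String[] args) throws Exception {
        String q = "AlertDefinitionService[@dataSource=\"hiveQueryLog\"]{@policyDef}";
        // The host and path below are illustrative only.
        System.out.println(buildQueryUrl("http://eagle-host:9099/eagle-service/rest/list", q, 100000));
    }
}
```

Running main prints the full request URL with the bracket, at-sign, equals, and quote characters percent-encoded, matching the raw query shown above.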
 
-&lt;h2 id=&quot;eagleebay&quot;&gt;Eagle Use Cases at eBay&lt;/h2&gt;
+&lt;h2 id=&quot;eagle在ebay的使用场景&quot;&gt;Eagle Use Cases at eBay&lt;/h2&gt;
 &lt;p&gt;Today, Eagle's data activity monitoring system is deployed on a Hadoop cluster of more than 2,500 nodes to protect the security of hundreds of petabytes of data, and we plan to extend it to a dozen or so more Hadoop clusters by the end of this year, covering all of eBay's major Hadoop deployments of more than 10,000 nodes. In our production environment we have configured baseline security policies for data in HDFS, Hive, and other clusters, and we will keep introducing more policies before year end to ensure the safety of critical data. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and policies driven by user profiles generated from machine-learning models. We also have broad policies to prevent data loss, data being copied to insecure locations, and sensitive data being accessed from unauthorized zones. The flexibility and extensibility of Eagle's policy definitions will let us easily add more, and more complex, policies to support many more use cases in the future.&lt;/p&gt;
 
-&lt;h2 id=&quot;section-1&quot;&gt;What's Next&lt;/h2&gt;
+&lt;h2 id=&quot;后续计划&quot;&gt;What's Next&lt;/h2&gt;
 &lt;p&gt;Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the overall health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, that save our platform team a great deal of manual effort and effectively improve cluster resource utilization.&lt;/p&gt;
 
 &lt;p&gt;Here are some of the features we are currently developing:&lt;/p&gt;
@@ -196,7 +202,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   &lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;section-2&quot;&gt;About the Author&lt;/h2&gt;
+&lt;h2 id=&quot;关于作者&quot;&gt;About the Author&lt;/h2&gt;
 &lt;p&gt;&lt;a href=&quot;https://github.com/haoch&quot;&gt;Hao Chen&lt;/a&gt;, an Apache Eagle committer and PMC member, is a senior software engineer in eBay's Analytics Data Infrastructure team, responsible for Eagle's product design, technical architecture, core implementation, and open-source community outreach.&lt;/p&gt;
 
 &lt;p&gt;Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:&lt;/p&gt;
@@ -210,7 +216,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
 
 &lt;p&gt;eBay's Analytics Data Infrastructure division is eBay's global data and analytics infrastructure organization, responsible for developing and managing eBay's data platforms, including databases, data warehouses, Hadoop, business intelligence, and machine learning. It helps teams across eBay make timely and effective business decisions with advanced data analytics, and provides analytics solutions to business users worldwide.&lt;/p&gt;
 
-&lt;h2 id=&quot;section-3&quot;&gt;References&lt;/h2&gt;
+&lt;h2 id=&quot;参考资料&quot;&gt;References&lt;/h2&gt;
 
 &lt;ul&gt;
   &lt;li&gt;Apache Eagle documentation: &lt;a href=&quot;http://goeagle.io&quot;&gt;http://goeagle.io&lt;/a&gt;&lt;/li&gt;
@@ -218,7 +224,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   &lt;li&gt;Apache Eagle project: &lt;a href=&quot;http://incubator.apache.org/projects/eagle.html&quot;&gt;http://incubator.apache.org/projects/eagle.html&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;section-4&quot;&gt;Press Coverage&lt;/h2&gt;
+&lt;h2 id=&quot;引用链接&quot;&gt;Press Coverage&lt;/h2&gt;
 &lt;ul&gt;
   &lt;li&gt;&lt;strong&gt;CSDN&lt;/strong&gt;: &lt;a href=&quot;http://www.csdn.net/article/2015-10-29/2826076&quot;&gt;http://www.csdn.net/article/2015-10-29/2826076&lt;/a&gt;&lt;/li&gt;
   &lt;li&gt;&lt;strong&gt;OSCHINA&lt;/strong&gt;: &lt;a href=&quot;http://www.oschina.net/news/67515/apache-eagle&quot;&gt;http://www.oschina.net/news/67515/apache-eagle&lt;/a&gt;&lt;/li&gt;

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/post/2015/10/27/apache-eagle-announce-cn.html
----------------------------------------------------------------------
diff --git a/_site/post/2015/10/27/apache-eagle-announce-cn.html b/_site/post/2015/10/27/apache-eagle-announce-cn.html
index 5d87315..088e22d 100644
--- a/_site/post/2015/10/27/apache-eagle-announce-cn.html
+++ b/_site/post/2015/10/27/apache-eagle-announce-cn.html
@@ -93,7 +93,7 @@
 
 <p>Recently, eBay proudly announced the open-source release of its distributed real-time security monitoring solution, Apache Eagle (http://goeagle.io), which officially joined the Apache Incubator on October 26, 2015. Apache Eagle provides an efficient, distributed streaming policy engine that is highly real-time, scalable, easy to extend, and interactive, and it integrates machine learning to build user-behavior profiles for intelligent, real-time protection of big data security across the Hadoop ecosystem.</p>
 
-<h2 id="section">Background</h2>
+<h2 id="背景">Background</h2>
 <p>With the growth of big data, more and more successful companies and organizations have adopted data-driven business models. At eBay, tens of thousands of engineers, analysts, and data scientists access and analyze petabytes of data every day to deliver an unrivaled experience to our users. We also make extensive use of massive data sets across our global business to connect our hundreds of millions of users.</p>
 
 <p>In recent years, Hadoop has gradually become the most popular solution for big data analytics, and eBay has long used Hadoop to mine value from data. For example, we use big data to improve the user search experience, identify and optimize precise ad targeting, enrich our product catalog, and analyze clickstreams to understand how users engage with our online marketplace.</p>
@@ -130,20 +130,20 @@
   <li><strong>Open source</strong>: Eagle has always been developed to open-source standards and is built on top of many open-source big data products, so we decided to open-source Eagle under the Apache license to give back to the community, and we look forward to the community's feedback, collaboration, and support.</li>
 </ul>
 
-<h2 id="eagle">Eagle Overview</h2>
+<h2 id="eagle概览">Eagle Overview</h2>
 
 <p><img src="/images/posts/eagle-group.png" alt="" /></p>
 
-<h4 id="data-collection-and-storage">Data Collection and Storage</h4>
+<h4 id="数据流接入和存储data-collection-and-storage">Data Collection and Storage</h4>
 <p>Eagle provides highly extensible programming APIs that allow any type of data source to be integrated into Eagle's policy evaluation engine. For example, the Eagle HDFS audit-event monitoring module receives data in real time via Kafka from a Namenode Log4j appender or a Logstash agent, while the Eagle Hive monitoring module collects the query logs of running Hive jobs through the YARN API, with high scalability and fault tolerance.</p>
 
-<h4 id="data-processing">Data Processing</h4>
+<h4 id="数据实时处理data-processing">Data Processing</h4>
 
 <p><strong>Stream Processing API</strong> Eagle provides a highly abstracted stream processing API that is independent of the underlying physical platform. Apache Storm is supported by default, but the API can be extended to any other stream processing engine such as Flink or Samza. This abstraction lets developers define monitoring data-processing logic without binding to any particular streaming platform at the physical execution layer: they reuse, connect, and assemble components such as data transformation, filtering, and external data joins to build the DAG (directed acyclic graph) they need, and they can easily integrate their business logic with the Eagle policy engine framework programmatically. Internally, Eagle compiles the DAG describing the business logic into a native application of the underlying streaming framework, such as an Apache Storm topology, thereby achieving platform independence.</p>
 
 <p><strong>Here is an example of how Eagle processes events and alerts:</strong></p>
 
-<pre><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
+<div class="highlighter-rouge"><pre class="highlight"><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -151,6 +151,7 @@ StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout
        .alertWithConsumer("userActivity", "userProfileExecutor") // ML policy evaluation
 env.execute(); // execute stream processing and alert
 </code></pre>
+</div>
 
 <p><strong>Alerting Framework</strong> The Eagle alerting framework consists of a stream metadata API, a policy engine provider API, a policy partitioner API, and an alert deduplication framework:</p>
 
@@ -160,7 +161,7 @@ env.execute(); // execute stream processing and alert
   <li>
     <p><strong>Extensibility</strong> Eagle's policy engine provider API allows you to plug in new policy engines</p>
 
-    <pre><code>  public interface PolicyEvaluatorServiceProvider {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  public interface PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one type of policy
     public Class&lt;? extends PolicyEvaluator&gt; getPolicyEvaluator(); // get policy evaluator implementation
     public List&lt;Module&gt; getBindingModules();  // policy text with json format to object mapping
@@ -171,15 +172,17 @@ env.execute(); // execute stream processing and alert
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 </code></pre>
+    </div>
   </li>
   <li><strong>Policy Partitioner API</strong> allows policies to run in parallel on different physical nodes and lets you customize the policy partitioner class. These capabilities enable policies and events to be processed in a fully distributed fashion.</li>
   <li>
     <p><strong>Scalability</strong> Eagle supports a policy partitioning interface so that large numbers of policies can run concurrently and scale out</p>
 
-    <pre><code>  public interface PolicyPartitioner extends Serializable {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  public interface PolicyPartitioner extends Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies
   }
 </code></pre>
+    </div>
 
     <p><img src="/images/posts/policy-partition.png" alt="" /></p>
 
@@ -236,26 +239,29 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   <li>
     <p>Single-event policy (a user accesses a sensitive column in Hive)</p>
 
-    <pre><code>  from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream;
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes)</p>
 
-    <pre><code>  hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into outputStream;
 </code></pre>
+    </div>
   </li>
 </ul>
 
 <p><strong>Query Service</strong> Eagle provides a SQL-like REST API for comprehensive computation, querying, and analysis over massive data sets, supporting filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, pagination, and more. Eagle uses HBase as its default data store but also supports JDBC-based relational databases. With HBase in particular, Eagle natively gains the ability to store and query massive volumes of monitoring data: the Eagle query framework compiles a user's SQL-like query into native HBase Filter objects and can further improve response times through HBase coprocessors.</p>
 
-<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
+<div class="highlighter-rouge"><pre class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
 </code></pre>
+</div>
 
-<h2 id="eagleebay">Eagle Use Cases at eBay</h2>
+<h2 id="eagle在ebay的使用场景">Eagle Use Cases at eBay</h2>
 <p>Today, Eagle's data activity monitoring system is deployed on a Hadoop cluster of more than 2,500 nodes to protect the security of hundreds of petabytes of data, and we plan to extend it to a dozen or so more Hadoop clusters by the end of this year, covering all of eBay's major Hadoop deployments of more than 10,000 nodes. In our production environment we have configured baseline security policies for data in HDFS, Hive, and other clusters, and we will keep introducing more policies before year end to ensure the safety of critical data. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and policies driven by user profiles generated from machine-learning models. We also have broad policies to prevent data loss, data being copied to insecure locations, and sensitive data being accessed from unauthorized zones. The flexibility and extensibility of Eagle's policy definitions will let us easily add more, and more complex, policies to support many more use cases in the future.</p>
 
-<h2 id="section-1">What's Next</h2>
+<h2 id="后续计划">What's Next</h2>
 <p>Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the overall health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, that save our platform team a great deal of manual effort and effectively improve cluster resource utilization.</p>
 
 <p>Here are some of the features we are currently developing:</p>
@@ -272,7 +278,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   </li>
 </ul>
 
-<h2 id="section-2">About the Author</h2>
+<h2 id="关于作者">About the Author</h2>
 <p><a href="https://github.com/haoch">Hao Chen</a>, an Apache Eagle committer and PMC member, is a senior software engineer in eBay's Analytics Data Infrastructure team, responsible for Eagle's product design, technical architecture, core implementation, and open-source community outreach.</p>
 
 <p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p>
@@ -286,7 +292,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
 
 <p>eBay's Analytics Data Infrastructure division is eBay's global data and analytics infrastructure organization, responsible for developing and managing eBay's data platforms, including databases, data warehouses, Hadoop, business intelligence, and machine learning. It helps teams across eBay make timely and effective business decisions with advanced data analytics, and provides analytics solutions to business users worldwide.</p>
 
-<h2 id="section-3">References</h2>
+<h2 id="参考资料">References</h2>
 
 <ul>
   <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li>
@@ -294,7 +300,7 @@ Eagle 支持根据用户在Hadoop平台上历史使用行为习惯来定义行
   <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li>
 </ul>
 
-<h2 id="section-4">Press Coverage</h2>
+<h2 id="引用链接">Press Coverage</h2>
 <ul>
   <li><strong>CSDN</strong>: <a href="http://www.csdn.net/article/2015-10-29/2826076">http://www.csdn.net/article/2015-10-29/2826076</a></li>
   <li><strong>OSCHINA</strong>: <a href="http://www.oschina.net/news/67515/apache-eagle">http://www.oschina.net/news/67515/apache-eagle</a></li>