Posted to commits@metron.apache.org by mm...@apache.org on 2017/04/28 13:52:45 UTC

incubator-metron git commit: METRON-899: Fix bad Kerberos doc merge in METRON-835 (mmiklavc) closes apache/incubator-metron#554

Repository: incubator-metron
Updated Branches:
  refs/heads/master 68bd6c520 -> f36db22eb


METRON-899: Fix bad Kerberos doc merge in METRON-835 (mmiklavc) closes apache/incubator-metron#554


Project: http://git-wip-us.apache.org/repos/asf/incubator-metron/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-metron/commit/f36db22e
Tree: http://git-wip-us.apache.org/repos/asf/incubator-metron/tree/f36db22e
Diff: http://git-wip-us.apache.org/repos/asf/incubator-metron/diff/f36db22e

Branch: refs/heads/master
Commit: f36db22eba256209d494f2ab3eda4f0c2b2448d7
Parents: 68bd6c5
Author: mmiklavc <mi...@gmail.com>
Authored: Fri Apr 28 07:52:18 2017 -0600
Committer: Michael Miklavcic <mi...@gmail.com>
Committed: Fri Apr 28 07:52:18 2017 -0600

----------------------------------------------------------------------
 metron-deployment/Kerberos-ambari-setup.md  |  33 ++
 metron-deployment/Kerberos-manual-setup.md  | 395 ++++++++++++++++------
 metron-deployment/README.md                 |   4 +-
 metron-deployment/vagrant/Kerberos-setup.md | 411 -----------------------
 metron-deployment/vagrant/README.md         |   1 -
 site-book/bin/generate-md.sh                |  27 +-
 6 files changed, 349 insertions(+), 522 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/metron-deployment/Kerberos-ambari-setup.md
----------------------------------------------------------------------
diff --git a/metron-deployment/Kerberos-ambari-setup.md b/metron-deployment/Kerberos-ambari-setup.md
new file mode 100644
index 0000000..149e8b2
--- /dev/null
+++ b/metron-deployment/Kerberos-ambari-setup.md
@@ -0,0 +1,33 @@
+# Setting Up Kerberos in Vagrant Full Dev
+**Note:** These are instructions for Kerberizing Metron Storm topologies from Kafka to Kafka. This does not cover the sensor connections or MAAS.
+General Kerberization notes can be found in the metron-deployment [README.md](README.md).
+
+## Setup a KDC
+See [Setup a KDC](Kerberos-manual-setup.md#setup-a-kdc)
+
+## Ambari Setup
+1. Kerberize the cluster via Ambari. More detailed documentation can be found [here](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_enabling_kerberos_security_in_ambari.html).
+
+    a. For this exercise, choose the existing MIT KDC option (this is what we set up and installed in the previous steps).
+
+    ![enable kerberos](readme-images/enable-kerberos.png)
+
+    ![enable kerberos get started](readme-images/enable-kerberos-started.png)
+
+    b. Set up the Kerberos configuration. The realm is EXAMPLE.COM. The admin principal will end up as admin/admin@EXAMPLE.COM when testing the KDC (see the optional check in step d below). Use the password you entered during the step for adding the admin principal.
+
+    ![enable kerberos configure](readme-images/enable-kerberos-configure-kerberos.png)
+
+    c. Click through to “Start and Test Services.” Let the cluster spin up.
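+
+    d. Optionally, sanity check the KDC by authenticating as the admin principal. This is just an optional verification, assuming the EXAMPLE.COM realm and the admin/admin principal created while setting up the KDC:
+
+    ```
+    # authenticate as the admin principal (enter the admin password chosen earlier)
+    kinit admin/admin@EXAMPLE.COM
+    # the ticket cache should now show a ticket for admin/admin@EXAMPLE.COM
+    klist
+    # discard the ticket once verified
+    kdestroy
+    ```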
+
+## Push Data
+1. Kinit with the metron user
+    ```
+    kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
+    ```
+
+See [Push Data](Kerberos-manual-setup.md#push-data)
+
+### More Information
+
+See [More Information](Kerberos-manual-setup.md#more-information)

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/metron-deployment/Kerberos-manual-setup.md
----------------------------------------------------------------------
diff --git a/metron-deployment/Kerberos-manual-setup.md b/metron-deployment/Kerberos-manual-setup.md
index 4eaa725..b444b0e 100644
--- a/metron-deployment/Kerberos-manual-setup.md
+++ b/metron-deployment/Kerberos-manual-setup.md
@@ -1,34 +1,103 @@
-# Setting Up Kerberos outside of an Ambari Management Pack
-The Ambari Management pack will manage Kerberization when used.
-**Note:** These are instructions for Kerberizing Metron Storm topologies from Kafka to Kafka. This does not cover the sensor connections or MAAS.
-General Kerberization notes can be found in the metron-deployment [README.md](README.md)
+Kerberos Setup
+==============
 
-## Setup the KDC
-See [Setup the KDC](vagrant/Kerberos-setup.md)
+This document provides instructions for kerberizing Metron's Vagrant-based development environments: "Quick Dev" and "Full Dev".  These instructions do not cover the Ambari MPack or sensors.  General Kerberization notes can be found in the metron-deployment [README.md](README.md).
 
-4. Setup the admin and metron user principals. You'll kinit as the metron user when running topologies. Make sure to remember the passwords.
-    ```
-    kadmin.local -q "addprinc admin/admin"
-    kadmin.local -q "addprinc metron"
-    ```
+* [Setup](#setup)
+* [Setup a KDC](#setup-a-kdc)
+* [Enable Kerberos](#enable-kerberos)
+* [Kafka Authorization](#kafka-authorization)
+* [HBase Authorization](#hbase-authorization)
+* [Storm Authorization](#storm-authorization)
+* [Start Metron](#start-metron)
+* [Push Data](#push-data)
+* [More Information](#more-information)
 
-## Kerberize Metron
+Setup
+-----
 
-1. Stop all topologies - we will  restart them again once Kerberos has been enabled.
-    ```
-    for topology in bro snort enrichment indexing; do storm kill $topology; done
-    ```
+1. Deploy a Vagrant development environment, either [Full Dev](vagrant/full-dev-platform) or [Quick Dev](vagrant/quick-dev-platform).
+
+1. Export the following environment variables.  These need to be set for the remainder of the instructions. Replace `node1` with the appropriate hosts, if you are running Metron anywhere other than Vagrant.
 
-2. Create the metron user HDFS home directory
     ```
-    sudo -u hdfs hdfs dfs -mkdir /user/metron && \
-    sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron && \
-    sudo -u hdfs hdfs dfs -chmod 770 /user/metron
+    # execute as root
+    sudo su -
+    export ZOOKEEPER=node1:2181
+    export ELASTICSEARCH=node1:9200
+    export BROKERLIST=node1:6667
+
+    export HDP_HOME="/usr/hdp/current"
+    export KAFKA_HOME="${HDP_HOME}/kafka-broker"
+    export METRON_VERSION="0.4.0"
+    export METRON_HOME="/usr/metron/${METRON_VERSION}"
     ```
 
-3. In [Ambari](http://node1:8080), setup Storm to run with Kerberos and run worker jobs as the submitting user:
+1. Execute the following commands as root.
+
+	```
+	sudo su -
+	```
+
+1. Stop all Metron topologies.  They will be restarted again once Kerberos has been enabled.
+
+  	```
+  	for topology in bro snort enrichment indexing; do
+  		storm kill $topology;
+  	done
+  	```
+
+1. Create the `metron` user's home directory in HDFS.
+
+  	```
+  	sudo -u hdfs hdfs dfs -mkdir /user/metron
+  	sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron
+  	sudo -u hdfs hdfs dfs -chmod 770 /user/metron
+  	```
+
+Setup a KDC
+-----------
+
+1. Install dependencies.
+
+  	```
+  	yum -y install krb5-server krb5-libs krb5-workstation
+  	```
+
+1. Define the host, `node1`, as the KDC.
+
+  	```
+  	sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
+  	cp -f /etc/krb5.conf /var/lib/ambari-server/resources/scripts
+  	```
+
+1. Do not copy/paste this full set of commands, as the `kdb5_util` command will not run as expected. Run the commands individually to ensure they all execute.  This step takes a moment; it creates the Kerberos database.
+
+  	```
+  	kdb5_util create -s
+
+  	/etc/rc.d/init.d/krb5kdc start
+  	chkconfig krb5kdc on
+
+  	/etc/rc.d/init.d/kadmin start
+  	chkconfig kadmin on
+  	```
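+
+1. Optionally, confirm that both daemons came up before continuing.  A quick check, assuming the CentOS 6 style init scripts used above:
+
+    ```
+    # both services should report that they are running
+    service krb5kdc status
+    service kadmin status
+    ```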
+
+1. Setup the `admin` and `metron` principals. You'll `kinit` as the `metron` principal when running topologies. Make sure to remember the passwords.
+
+  	```
+  	kadmin.local -q "addprinc admin/admin"
+  	kadmin.local -q "addprinc metron"
+  	```
+
+Enable Kerberos
+---------------
+
+1. In [Ambari](http://node1:8080), setup Storm to use Kerberos and run worker jobs as the submitting user.
+
+    a. Add the following properties to the custom storm-site:
 
-    a. Add the following properties to custom storm-site:
     ```
     topology.auto-credentials=['org.apache.storm.security.auth.kerberos.AutoTGT']
     nimbus.credential.renewers.classes=['org.apache.storm.security.auth.kerberos.AutoTGT']
@@ -43,7 +112,7 @@ See [Setup the KDC](vagrant/Kerberos-setup.md)
 
     ![custom storm-site properties](readme-images/ambari-storm-site-properties.png)
 
-4. Kerberize the cluster via Ambari. More detailed documentation can be found [here](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_enabling_kerberos_security_in_ambari.html).
+1. Kerberize the cluster via Ambari. More detailed documentation can be found [here](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_enabling_kerberos_security_in_ambari.html).
 
     a. For this exercise, choose existing MIT KDC (this is what we setup and installed in the previous steps.)
 
@@ -59,61 +128,118 @@ See [Setup the KDC](vagrant/Kerberos-setup.md)
 
     ![enable kerberos configure](readme-images/custom-storm-site-final.png)
 
-5. Setup Metron keytab
-    ```
-    kadmin.local -q "ktadd -k metron.headless.keytab metron@EXAMPLE.COM" && \
-    cp metron.headless.keytab /etc/security/keytabs && \
-    chown metron:hadoop /etc/security/keytabs/metron.headless.keytab && \
-    chmod 440 /etc/security/keytabs/metron.headless.keytab
-    ```
+1. Create a Metron keytab
 
-6. Kinit with the metron user
     ```
-    kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
+  	kadmin.local -q "ktadd -k metron.headless.keytab metron@EXAMPLE.COM"
+  	cp metron.headless.keytab /etc/security/keytabs
+  	chown metron:hadoop /etc/security/keytabs/metron.headless.keytab
+  	chmod 440 /etc/security/keytabs/metron.headless.keytab
+  	```
+
+Kafka Authorization
+-------------------
+
+1. Acquire a Kerberos ticket using the `metron` principal.
+
     ```
+  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
+  	```
+
+1. Create any additional Kafka topics that you will need. We need to create the topics before adding the required ACLs. The current full dev installation will deploy bro, snort, enrichments, and indexing only.  For example, you may want to add a topic for 'yaf' telemetry.
 
-7. First create any additional Kafka topics you will need. We need to create the topics before adding the required ACLs. The current full dev installation will deploy bro, snort, enrichments, and indexing only. e.g.
     ```
-    ${HDP_HOME}/kafka-broker/bin/kafka-topics.sh --zookeeper ${ZOOKEEPER}:2181 --create --topic yaf --partitions 1 --replication-factor 1
+  	${KAFKA_HOME}/bin/kafka-topics.sh \
+      --zookeeper ${ZOOKEEPER} \
+      --create \
+      --topic yaf \
+      --partitions 1 \
+      --replication-factor 1
+  	```
+
+1. Setup Kafka ACLs for the `bro`, `snort`, `enrichments`, and `indexing` topics.  Run the same command against any additional topics that you might be using; for example `yaf`.
+
     ```
+  	export KERB_USER=metron
+
+  	for topic in bro snort enrichments indexing; do
+  		${KAFKA_HOME}/bin/kafka-acls.sh \
+          --authorizer kafka.security.auth.SimpleAclAuthorizer \
+          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+          --add \
+          --allow-principal User:${KERB_USER} \
+          --topic ${topic}
+  	done
+  	```
+
+1. Setup Kafka ACLs for the consumer groups.  This command sets the ACLs for Bro, Snort, YAF, Enrichments, Indexing, and the Profiler.  Execute the same command for any additional Parsers that you may be running.
 
-8. Setup Kafka ACLs for the topics
     ```
     export KERB_USER=metron
-    for topic in bro enrichments indexing snort; do
-        ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --topic ${topic}
-    done
-    ```
 
-9. Setup Kafka ACLs for the consumer groups
-    ```
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group bro_parser
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group snort_parser
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group yaf_parser
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group enrichments
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group indexing
-    ```
+  	for group in bro_parser snort_parser yaf_parser enrichments indexing profiler; do
+  		${KAFKA_HOME}/bin/kafka-acls.sh \
+          --authorizer kafka.security.auth.SimpleAclAuthorizer \
+          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+          --add \
+          --allow-principal User:${KERB_USER} \
+          --group ${group}
+  	done
+  	```
+
+1. Add the `metron` principal to the `kafka-cluster` ACL.
 
-10. Add metron user to the Kafka cluster ACL
     ```
-    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --cluster kafka-cluster
+  	${KAFKA_HOME}/bin/kafka-acls.sh \
+        --authorizer kafka.security.auth.SimpleAclAuthorizer \
+        --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+        --add \
+        --allow-principal User:${KERB_USER} \
+        --cluster kafka-cluster
+  	```
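+
+1. Optionally, verify that the ACLs were applied.  A quick check against the `bro` topic, assuming the same environment variables as above (the same command also accepts `--group`):
+
+    ```
+    ${KAFKA_HOME}/bin/kafka-acls.sh \
+        --authorizer kafka.security.auth.SimpleAclAuthorizer \
+        --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+        --list \
+        --topic bro
+    ```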
+
+HBase Authorization
+-------------------
+
+1. Acquire a Kerberos ticket using the `hbase` principal
+
     ```
+  	kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-metron_cluster@EXAMPLE.COM
+  	```
+
+1. Grant permissions for the HBase tables used in Metron.
 
-11. We also need to grant permissions to the HBase tables. Kinit as the hbase user and add ACLs for metron.
     ```
-    kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-metron_cluster@EXAMPLE.COM
-    echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
-    echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
+  	echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
+  	echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
+  	```
+
+1. If you are using the Profiler, do the same for its HBase table.
+
     ```
+  	echo "create 'profiler', 'P'" | hbase shell
+  	echo "grant 'metron', 'RW', 'profiler', 'P'" | hbase shell
+  	```
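+
+1. Optionally, verify the grants.  An optional check, assuming the HBase AccessController is enabled, which the `grant` commands above already require:
+
+    ```
+    echo "user_permission 'threatintel'" | hbase shell
+    echo "user_permission 'enrichment'" | hbase shell
+    ```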
+
+Storm Authorization
+-------------------
+
+1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` principal.
 
-12. Create a “.storm” directory in the metron user’s home directory and switch to that directory.
     ```
-    su metron
-    mkdir ~/.storm
-    cd ~/.storm
+  	su metron
+  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
+  	```
+
+1. Create the directory `/home/metron/.storm` and switch to that directory.
+
     ```
+  	mkdir /home/metron/.storm
+  	cd /home/metron/.storm
+  	```
+
+1. Create a client JAAS file at `/home/metron/.storm/client_jaas.conf`.  This should look identical to the Storm client JAAS file located at `/etc/storm/conf/client_jaas.conf` except for the addition of a `Client` stanza. The `Client` stanza is used for Zookeeper. All quotes and semicolons are necessary.
 
-13. Create a custom client jaas file. This should look identical to the Storm client jaas file located in /etc/storm/conf/client_jaas.conf except for the addition of a Client stanza. The Client stanza is used for Zookeeper. All quotes and semicolons are necessary.
     ```
     cat << EOF > client_jaas.conf
     StormClient {
@@ -143,75 +269,152 @@ See [Setup the KDC](vagrant/Kerberos-setup.md)
     EOF
     ```
 
-14. Create a storm.yaml with jaas file info. Set the array of nimbus hosts accordingly.
+1. Create a YAML file at `/home/metron/.storm/storm.yaml`.  This should point to the client JAAS file.  Set the array of nimbus hosts accordingly.
+
     ```
-    cat << EOF > storm.yaml
+    cat << EOF > /home/metron/.storm/storm.yaml
     nimbus.seeds : ['node1']
     java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
     storm.thrift.transport : 'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
     EOF
     ```
 
-15. Create an auxiliary storm configuration json file in the metron user’s home directory. Note the login config option in the file points to our custom client_jaas.conf.
+1. Create an auxiliary storm configuration file at `/home/metron/storm-config.json`. Note the login config option in the file points to the client JAAS file.
+
     ```
-    cat << EOF > ~/storm-config.json
+    cat << EOF > /home/metron/storm-config.json
     {
         "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
     }
     EOF
     ```
 
-16. Setup enrichment and indexing.
+1. Configure the Enrichment, Indexing, and Profiler topologies to use the client JAAS file.  Add the following properties to each of the topology properties files listed below.
 
-    a. Modify enrichment.properties as root located at `${METRON_HOME}/config/enrichment.properties`
-    ```
-    if [[ $EUID -ne 0 ]]; then
-        echo -e "\nERROR:\tYou must be root to run these commands.  You may need to type exit."
-    else
-        sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/enrichment.properties
-        sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/enrichment.properties
-    fi
-    ```
+  	```
+  	kafka.security.protocol=PLAINTEXTSASL
+  	topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+  	```
+
+    * `${METRON_HOME}/config/enrichment.properties`
+    * `${METRON_HOME}/config/elasticsearch.properties`
+    * `${METRON_HOME}/config/profiler.properties`
+
+    Use the following command to automate this step.
 
-    b. Modify elasticsearch.properties as root located at `${METRON_HOME}/config/elasticsearch.properties`
     ```
-    if [[ $EUID -ne 0 ]]; then
-        echo -e "\nERROR:\tYou must be root to run these commands.  You may need to type exit."
-    else
-        sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/elasticsearch.properties
-        sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/elasticsearch.properties
-    fi
+    for file in enrichment.properties elasticsearch.properties profiler.properties; do
+      echo ${file}
+      sed -i "s/^kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/" "${METRON_HOME}/config/${file}"
+      sed -i "s/^topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/" "${METRON_HOME}/config/${file}"
+    done
     ```
 
-17. Distribute the custom jaas file and the keytab to each supervisor node, in the same locations as above. This ensures that the worker nodes can authenticate.  For a one node cluster, nothing needs to be done.
+Start Metron
+------------
+
+1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` principal.
 
-18. Kinit with the metron user again
-    ```
-    su metron
-    cd
-    kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
     ```
+  	su metron
+  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
+  	```
+
+1. Restart the parser topologies. Be sure to pass in the new parameter, `-ksp` or `--kafka_security_protocol`.  The following command will start only the Bro and Snort topologies.  Execute the same command for any other Parsers that you may need, for example `yaf`.
 
-19. Restart the parser topologies. Be sure to pass in the new parameter, “-ksp” or “--kafka_security_protocol.” Run this from the metron home directory.
     ```
     for parser in bro snort; do
-        ${METRON_HOME}/bin/start_parser_topology.sh -z ${ZOOKEEPER}:2181 -s ${parser} -ksp SASL_PLAINTEXT -e storm-config.json
+       ${METRON_HOME}/bin/start_parser_topology.sh \
+               -z ${ZOOKEEPER} \
+               -s ${parser} \
+               -ksp SASL_PLAINTEXT \
+               -e /home/metron/storm-config.json;
     done
     ```
 
-20. Now restart the enrichment and indexing topologies.
+1. Restart the Enrichment and Indexing topologies.
+
+    ```
+  	${METRON_HOME}/bin/start_enrichment_topology.sh
+  	${METRON_HOME}/bin/start_elasticsearch_topology.sh
+  	```
+
+Metron should be ready to receive data.
+
+Push Data
+---------
+1. Push some sample data to one of the parser topics.  For example, for Bro you can take raw data from [incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput).
+
     ```
-    ${METRON_HOME}/bin/start_enrichment_topology.sh
-    ${METRON_HOME}/bin/start_elasticsearch_topology.sh
+    cat sample-bro.txt | ${KAFKA_HOME}/bin/kafka-console-producer.sh \
+            --broker-list ${BROKERLIST} \
+            --security-protocol SASL_PLAINTEXT \
+            --topic bro
+  	```
+
+1. Wait a few moments for data to flow through the system and then check for data in the Elasticsearch indices. Replace bro with whichever parser type you’ve chosen.
+
     ```
+  	curl -XGET "${ELASTICSEARCH}/bro*/_search"
+  	curl -XGET "${ELASTICSEARCH}/bro*/_count"
+  	```
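+
+1. Optionally, confirm that all of the topologies are still up and running.  A quick check, assuming the `storm` CLI is on the path:
+
+    ```
+    storm list
+    ```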
+
+1. You should have data flowing from the parsers all the way through to the Elasticsearch indices. This completes the Kerberization instructions.
+
+More Information
+----------------
+
+### Kerberos
+
+Unsure of the Kerberos principal associated with a keytab? There are a couple of ways to get this. One is via the list of principals that Ambari provides as a downloadable CSV. If you didn’t download this list, you can also check the principal manually by running the following against the keytab.
+
+```
+klist -kt /etc/security/keytabs/<keytab-file-name>
+```
+
+E.g.
+
+```
+klist -kt /etc/security/keytabs/hbase.headless.keytab
+Keytab name: FILE:/etc/security/keytabs/hbase.headless.keytab
+KVNO Timestamp         Principal
+---- ----------------- --------------------------------------------------------
+   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
+   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
+   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
+   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
+   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
+```
+
+### Kafka with Kerberos enabled
+
+#### Write data to a topic with SASL
+
+```
+cat sample-yaf.txt | ${KAFKA_HOME}/bin/kafka-console-producer.sh \
+        --broker-list ${BROKERLIST} \
+        --security-protocol PLAINTEXTSASL \
+        --topic yaf
+```
+
+#### View topic data from latest offset with SASL
 
-Metron should be ready to receieve data.
+```
+${KAFKA_HOME}/bin/kafka-console-consumer.sh \
+        --zookeeper ${ZOOKEEPER} \
+        --security-protocol PLAINTEXTSASL \
+        --topic yaf
+```
 
-## Push Data
-See [Push Data](vagrant/Kerberos-setup.md)
+#### Modify the sensor-stubs to send logs via SASL
+```
+sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL --topic/' /opt/sensor-stubs/bin/start-*-stub
+for sensorstub in bro snort; do
+    service sensor-stubs stop ${sensorstub};
+    service sensor-stubs start ${sensorstub};
+done
+```
 
-### Other useful commands
-See [Other useful commands](vagrant/Kerberos-setup.md)
+### References
 
-#### References
 * [https://github.com/apache/storm/blob/master/SECURITY.md](https://github.com/apache/storm/blob/master/SECURITY.md)

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/metron-deployment/README.md
----------------------------------------------------------------------
diff --git a/metron-deployment/README.md b/metron-deployment/README.md
index adb48fd..012b7b6 100644
--- a/metron-deployment/README.md
+++ b/metron-deployment/README.md
@@ -169,10 +169,10 @@ The MPack can allow Metron to be installed and then Kerberized, or installed on
 * Storm (and Metron) must be restarted after Metron is installed on an already Kerberized cluster.  Several Storm configs get updated, and Metron will be unable to write to Kafka without a restart.
   * Kerberizing a cluster that already has Metron installed restarts all services as part of Kerberization, so no additional restart is needed.
 
-Instructions for setup on Full Dev can be found at [Kerberos-setup.md](vagrant/Kerberos-setup.md).  These instructions can also be used for setting up KDC and testing.
+Instructions for setup on Full Dev can be found at [Kerberos-ambari-setup.md](Kerberos-ambari-setup.md).  These instructions reference the manual install instructions.
 
 ### Kerberos Without an MPack
-Using the MPack is preferred, but instructions for Kerberizing manually can be found at [Kerberos-manual-setup.md](Kerberos-manual-setup.md)
+Using the MPack is preferred, but instructions for Kerberizing manually can be found at [Kerberos-manual-setup.md](Kerberos-manual-setup.md). These instructions are referenced by the Ambari Kerberos install instructions and include commands for setting up a KDC.
 
 ## TODO
 - Support Ubuntu deployments

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/metron-deployment/vagrant/Kerberos-setup.md
----------------------------------------------------------------------
diff --git a/metron-deployment/vagrant/Kerberos-setup.md b/metron-deployment/vagrant/Kerberos-setup.md
deleted file mode 100644
index 759d63d..0000000
--- a/metron-deployment/vagrant/Kerberos-setup.md
+++ /dev/null
@@ -1,411 +0,0 @@
-Kerberos Setup
-==============
-
-This document provides instructions for kerberizing Metron's Vagrant-based development environments; "Quick Dev" and "Full Dev".  These instructions do not cover the Ambari MPack or sensors.  General Kerberization notes can be found in the metron-deployment [README.md](../README.md).
-
-* [Setup](#setup)
-* [Create a KDC](#create-a-kdc)
-* [Enable Kerberos](#enable-kerberos)
-* [Kafka Authorization](#kafka-authorization)
-* [HBase Authorization](#hbase-authorization)
-* [Storm Authorization](#storm-authorization)
-* [Start Metron](#start-metron)
-
-Setup
------
-
-1. Deploy a Vagrant development environment; either [Full Dev](full-dev-platform) or [Quick Dev](quick-dev-platform).
-
-1. Export the following environment variables.  These need to be set for the remainder of the instructions. Replace `node1` with the appropriate hosts, if you are running Metron anywhere other than Vagrant.  
-
-    ```
-    export ZOOKEEPER=node1:2181
-    export ELASTICSEARCH=node1:9200
-    export KAFKA=node1:6667
-    
-    export HDP_HOME="/usr/hdp/current"
-    export KAFKA_HOME="${HDP_HOME}/kafka-broker"
-    export METRON_VERSION="0.4.0"
-    export METRON_HOME="/usr/metron/${METRON_VERSION}"
-    ```
-
-1. Execute the following commands as root.
-	
-	```
-	sudo su -
-	```
-
-1. Stop all Metron topologies.  They will be restarted again once Kerberos has been enabled.
-
-  	```
-  	for topology in bro snort enrichment indexing; do
-  		storm kill $topology;
-  	done
-  	```
-
-1. Create the `metron` user's home directory in HDFS.
-
-  	```
-  	sudo -u hdfs hdfs dfs -mkdir /user/metron
-  	sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron
-  	sudo -u hdfs hdfs dfs -chmod 770 /user/metron
-  	```
-
-Create a KDC
-------------
-
-1. Install dependencies.
-
-  	```
-  	yum -y install krb5-server krb5-libs krb5-workstation
-  	```
-
-1. Define the host, `node1`, as the KDC.  
-
-  	```
-  	sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
-  	cp -f /etc/krb5.conf /var/lib/ambari-server/resources/scripts
-  	```
-
-1. Do not copy/paste this full set of commands as the `kdb5_util` command will not run as expected. Run the commands individually to ensure they all execute.  This step takes a moment. It creates the kerberos database.
-
-  	```
-  	kdb5_util create -s
-
-  	/etc/rc.d/init.d/krb5kdc start
-  	chkconfig krb5kdc on
-
-  	/etc/rc.d/init.d/kadmin start
-  	chkconfig kadmin on
-  	```
-
-1. Setup the `admin` and `metron` principals. You'll `kinit` as the `metron` principal when running topologies. Make sure to remember the passwords.
-
-  	```
-  	kadmin.local -q "addprinc admin/admin"
-  	kadmin.local -q "addprinc metron"
-  	```
-
-Enable Kerberos
----------------
-
-1. In [Ambari](http://node1:8080), setup Storm to use Kerberos and run worker jobs as the submitting user.
-
-    a. Add the following properties to the custom storm-site:
-
-    ```
-    topology.auto-credentials=['org.apache.storm.security.auth.kerberos.AutoTGT']
-    nimbus.credential.renewers.classes=['org.apache.storm.security.auth.kerberos.AutoTGT']
-    supervisor.run.worker.as.user=true
-    ```
-
-    b. In the Storm config section in Ambari, choose “Add Property” under custom storm-site:
-
-    ![custom storm-site](../readme-images/ambari-storm-site.png)
-
-    c. In the dialog window, choose the “bulk property add mode” toggle button and add the below values:
-
-    ![custom storm-site properties](../readme-images/ambari-storm-site-properties.png)
-
-1. Kerberize the cluster via Ambari. More detailed documentation can be found [here](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_enabling_kerberos_security_in_ambari.html).
-
-    a. For this exercise, choose existing MIT KDC (this is what we setup and installed in the previous steps.)
-
-    ![enable keberos](../readme-images/enable-kerberos.png)
-
-    ![enable keberos get started](../readme-images/enable-kerberos-started.png)
-
-    b. Setup Kerberos configuration. Realm is EXAMPLE.COM. The admin principal will end up as admin/admin@EXAMPLE.COM when testing the KDC. Use the password you entered during the step for adding the admin principal.
-
-    ![enable keberos configure](../readme-images/enable-kerberos-configure-kerberos.png)
-
-    c. Click through to “Start and Test Services.” Let the cluster spin up, but don't worry about starting up Metron via Ambari - we're going to run the parsers manually against the rest of the Hadoop cluster Kerberized. The wizard will fail at starting Metron, but this is OK. Click “continue.” When you’re finished, the custom storm-site should look similar to the following:
-
-    ![enable keberos configure](../readme-images/custom-storm-site-final.png)
-
-1. Create a Metron keytab
-
-    ```
-  	kadmin.local -q "ktadd -k metron.headless.keytab metron@EXAMPLE.COM"
-  	cp metron.headless.keytab /etc/security/keytabs
-  	chown metron:hadoop /etc/security/keytabs/metron.headless.keytab
-  	chmod 440 /etc/security/keytabs/metron.headless.keytab
-  	```
-
-Kafka Authorization
--------------------
-
-1. Acquire a Kerberos ticket using the `metron` principal.
-
-    ```
-  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
-  	```
-
-1. Create any additional Kafka topics that you will need. We need to create the topics before adding the required ACLs. The current full dev installation will deploy bro, snort, enrichments, and indexing only.  For example, you may want to add a topic for 'yaf' telemetry.
-
-    ```
-  	${KAFKA_HOME}/bin/kafka-topics.sh \
-      --zookeeper ${ZOOKEEPER} \
-      --create \
-      --topic yaf \
-      --partitions 1 \
-      --replication-factor 1
-  	```
-
-1. Setup Kafka ACLs for the `bro`, `snort`, `enrichments`, and `indexing` topics.  Run the same command against any additional topics that you might be using; for example `yaf`.
-
-    ```
-  	export KERB_USER=metron
-
-  	for topic in bro snort enrichments indexing; do
-  		${KAFKA_HOME}/bin/kafka-acls.sh \
-          --authorizer kafka.security.auth.SimpleAclAuthorizer \
-          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
-          --add \
-          --allow-principal User:${KERB_USER} \
-          --topic ${topic}
-  	done
-  	```
-
-1. Setup Kafka ACLs for the consumer groups.  This command sets the ACLs for Bro, Snort, YAF, Enrichments, Indexing, and the Profiler.  Execute the same command for any additional Parsers that you may be running.
-
-    ```
-    export KERB_USER=metron
-
-  	for group in bro_parser snort_parser yaf_parser enrichments indexing profiler; do
-  		${KAFKA_HOME}/bin/kafka-acls.sh \
-          --authorizer kafka.security.auth.SimpleAclAuthorizer \
-          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
-          --add \
-          --allow-principal User:${KERB_USER} \
-          --group ${group}
-  	done
-  	```
-
-1. Add the `metron` principal to the `kafka-cluster` ACL.
-
-    ```
-  	${KAFKA_HOME}/bin/kafka-acls.sh \
-        --authorizer kafka.security.auth.SimpleAclAuthorizer \
-        --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
-        --add \
-        --allow-principal User:${KERB_USER} \
-        --cluster kafka-cluster
-  	```
-
-HBase Authorization
--------------------
-
-1. Acquire a Kerberos ticket using the `hbase` principal
-
-    ```
-  	kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-metron_cluster@EXAMPLE.COM
-  	```
-
-1. Grant permissions for the HBase tables used in Metron.
-
-    ```
-  	echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
-  	echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
-  	```
-
-1. If you are using the Profiler, do the same for its HBase table.
-
-    ```
-  	echo "create 'profiler', 'P'" | hbase shell
-  	echo "grant 'metron', 'RW', 'profiler', 'P'" | hbase shell
-  	```
-
-Storm Authorization
--------------------
-
-1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` principal.
-
-    ```
-  	su metron
-  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
-  	```
-
-1. Create the directory `/home/metron/.storm` and switch to that directory.
-
-    ```
-  	mkdir /home/metron/.storm
-  	cd /home/metron/.storm
-  	```
-
-1. Create a client JAAS file at `/home/metron/.storm/client_jaas.conf`.  This should look identical to the Storm client JAAS file located at `/etc/storm/conf/client_jaas.conf` except for the addition of a `Client` stanza. The `Client` stanza is used for Zookeeper. All quotes and semicolons are necessary.
-
-    ```
-    cat << EOF > client_jaas.conf
-    StormClient {
-        com.sun.security.auth.module.Krb5LoginModule required
-        useTicketCache=true
-        renewTicket=true
-        serviceName="nimbus";
-    };
-    Client {
-        com.sun.security.auth.module.Krb5LoginModule required
-        useKeyTab=true
-        keyTab="/etc/security/keytabs/metron.headless.keytab"
-        storeKey=true
-        useTicketCache=false
-        serviceName="zookeeper"
-        principal="metron@EXAMPLE.COM";
-    };
-    KafkaClient {
-        com.sun.security.auth.module.Krb5LoginModule required
-        useKeyTab=true
-        keyTab="/etc/security/keytabs/metron.headless.keytab"
-        storeKey=true
-        useTicketCache=false
-        serviceName="kafka"
-        principal="metron@EXAMPLE.COM";
-    };
-    EOF
-    ```
-
-1. Create a YAML file at `/home/metron/.storm/storm.yaml`.  This should point to the client JAAS file.  Set the array of nimbus hosts accordingly.
-
-    ```
-    cat << EOF > /home/metron/.storm/storm.yaml
-    nimbus.seeds : ['node1']
-    java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
-    storm.thrift.transport : 'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
-    EOF
-    ```
-
-1. Create an auxiliary storm configuration file at `/home/metron/storm-config.json`. Note the login config option in the file points to the client JAAS file.
-
-    ```
-    cat << EOF > /home/metron/storm-config.json
-    {
-        "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
-    }
-    EOF
-    ```
-
-1. Configure the Enrichment, Indexing and the Profiler topologies to use the client JAAS file.  Add the following properties to each of the topology properties files.
-
-  	```
-  	kafka.security.protocol=PLAINTEXTSASL
-  	topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
-  	```
-
-    * `${METRON_HOME}/config/enrichment.properties`
-    * `${METRON_HOME}/config/elasticsearch.properties`
-    * `${METRON_HOME}/config/profiler.properties`
-
-    Use the following command to automate this step.
-
-    ```
-    for file in enrichment.properties elasticsearch.properties profiler.properties; do
-      echo ${file}
-      sed -i "s/^kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/" "${METRON_HOME}/config/${file}"
-      sed -i "s/^topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/" "${METRON_HOME}/config/${file}"
-    done
-    ```
-
-Start Metron
-------------
-
-1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` principal.
-
-    ```
-  	su metron
-  	kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
-  	```
-
-1. Restart the parser topologies. Be sure to pass in the new parameter, `-ksp` or `--kafka_security_protocol`.  The following command will start only the Bro and Snort topologies.  Execute the same command for any other Parsers that you may need, for example `yaf`.  
-
-    ```
-    for parser in bro snort; do
-    	${METRON_HOME}/bin/start_parser_topology.sh \
-	    	-z ${ZOOKEEPER} \
-	    	-s ${parser} \
-	    	-ksp SASL_PLAINTEXT \
-	    	-e /home/metron/storm-config.json;
-    done
-    ```
-
-1. Restart the Enrichment and Indexing topologies.
-
-    ```
-  	${METRON_HOME}/bin/start_enrichment_topology.sh
-  	${METRON_HOME}/bin/start_elasticsearch_topology.sh
-  	```
-
-1. Push some sample data to one of the parser topics. E.g for Bro we took raw data from [incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
-
-    ```
-  	cat sample-bro.txt | ${KAFKA_HOME}/kafka-broker/bin/kafka-console-producer.sh \
-	  	--broker-list ${KAFKA} \
-	  	--security-protocol SASL_PLAINTEXT \
-	  	--topic bro
-  	```
-
-1. Wait a few moments for data to flow through the system and then check for data in the Elasticsearch indices. Replace yaf with whichever parser type you’ve chosen.
-
-    ```
-  	curl -XGET "${ELASTICSEARCH}/bro*/_search"
-  	curl -XGET "${ELASTICSEARCH}/bro*/_count"
-  	```
-
-1. You should have data flowing from the parsers all the way through to the indexes. This completes the Kerberization instructions
-
-More Information
-----------------
-
-### Kerberos
-
-Unsure of your Kerberos principal associated with a keytab? There are a couple ways to get this. One is via the list of principals that Ambari provides via downloadable csv. If you didn’t download this list, you can also check the principal manually by running the following against the keytab.
-
-```
-klist -kt /etc/security/keytabs/<keytab-file-name>
-```
-
-E.g.
-
-```
-klist -kt /etc/security/keytabs/hbase.headless.keytab
-Keytab name: FILE:/etc/security/keytabs/hbase.headless.keytab
-KVNO Timestamp         Principal
----- ----------------- --------------------------------------------------------
-   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
-   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
-   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
-   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
-   1 03/28/17 19:29:36 hbase-metron_cluster@EXAMPLE.COM
-```
-
-### Kafka with Kerberos enabled
-
-#### Write data to a topic with SASL
-
-```
-cat sample-yaf.txt | ${KAFKA_HOME}/bin/kafka-console-producer.sh \
-	--broker-list ${KAFKA} \
-	--security-protocol PLAINTEXTSASL \
-	--topic yaf
-```
-
-#### View topic data from latest offset with SASL
-
-```
-${KAFKA_HOME}/bin/kafka-console-consumer.sh \
-	--zookeeper ${ZOOKEEPER} \
-	--security-protocol PLAINTEXTSASL \
-	--topic yaf
-```
-
-#### Modify the sensor-stubs to send logs via SASL
-```
-sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL --topic/' /opt/sensor-stubs/bin/start-*-stub
-for sensorstub in bro snort; do 
-	service sensor-stubs stop ${sensorstub}; 
-	service sensor-stubs start ${sensorstub}; 
-done
-```
-
-### References
-
-* [https://github.com/apache/storm/blob/master/SECURITY.md](https://github.com/apache/storm/blob/master/SECURITY.md)

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/metron-deployment/vagrant/README.md
----------------------------------------------------------------------
diff --git a/metron-deployment/vagrant/README.md b/metron-deployment/vagrant/README.md
index ae49285..b629a1f 100644
--- a/metron-deployment/vagrant/README.md
+++ b/metron-deployment/vagrant/README.md
@@ -1,6 +1,5 @@
 # Vagrant Deployment
 
-- Kerberos Setup
 - Codelab Platform
 - Fast CAPA Test Platform
 - Full Dev Platform

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/f36db22e/site-book/bin/generate-md.sh
----------------------------------------------------------------------
diff --git a/site-book/bin/generate-md.sh b/site-book/bin/generate-md.sh
index d428fbc..686635c 100755
--- a/site-book/bin/generate-md.sh
+++ b/site-book/bin/generate-md.sh
@@ -57,12 +57,12 @@ EXCLUSION_LIST=(
 ## Each entry is a file path, relative to $METRON_SOURCE.
 ## Note: any images in site-book/src/site/src-resources/images/ will also be included.
 RESOURCE_LIST=(
-    metron-deployment/vagrant/readme-images/ambari-storm-site-properties.png
-    metron-deployment/vagrant/readme-images/ambari-storm-site.png
-    metron-deployment/vagrant/readme-images/custom-storm-site-final.png
-    metron-deployment/vagrant/readme-images/enable-kerberos-configure-kerberos.png
-    metron-deployment/vagrant/readme-images/enable-kerberos-started.png
-    metron-deployment/vagrant/readme-images/enable-kerberos.png
+    metron-deployment/readme-images/ambari-storm-site-properties.png
+    metron-deployment/readme-images/ambari-storm-site.png
+    metron-deployment/readme-images/custom-storm-site-final.png
+    metron-deployment/readme-images/enable-kerberos-configure-kerberos.png
+    metron-deployment/readme-images/enable-kerberos-started.png
+    metron-deployment/readme-images/enable-kerberos.png
     metron-platform/metron-parsers/parser_arch.png
     metron-platform/metron-indexing/indexing_arch.png
     metron-platform/metron-enrichment/enrichment_arch.png
@@ -73,12 +73,15 @@ RESOURCE_LIST=(
 ## that needs an href re-written to match a resource in the images/ directory.  Odd fields are the corresponding
 ## one-line sed script, in single quotes, that does the rewrite.  See below for examples.
 HREF_REWRITE_LIST=(
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/ambari-storm-site-properties.png)#(../../images/ambari-storm-site-properties.png)#g'
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/ambari-storm-site.png)#(../../images/ambari-storm-site.png)#g'
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/custom-storm-site-final.png)#(../../images/custom-storm-site-final.png)#g'
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/enable-kerberos-configure-kerberos.png)#(../../images/enable-kerberos-configure-kerberos.png)#g'
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/enable-kerberos-started.png)#(../../images/enable-kerberos-started.png)#g'
-    metron-deployment/vagrant/Kerberos-setup.md 's#(readme-images/enable-kerberos.png)#(../../images/enable-kerberos.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/ambari-storm-site-properties.png)#(../images/ambari-storm-site-properties.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/ambari-storm-site.png)#(../images/ambari-storm-site.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/custom-storm-site-final.png)#(../images/custom-storm-site-final.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/enable-kerberos-configure-kerberos.png)#(../images/enable-kerberos-configure-kerberos.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/enable-kerberos-started.png)#(../images/enable-kerberos-started.png)#g'
+    metron-deployment/Kerberos-manual-setup.md 's#(readme-images/enable-kerberos.png)#(../images/enable-kerberos.png)#g'
+    metron-deployment/Kerberos-ambari-setup.md 's#(readme-images/enable-kerberos-configure-kerberos.png)#(../images/enable-kerberos-configure-kerberos.png)#g'
+    metron-deployment/Kerberos-ambari-setup.md 's#(readme-images/enable-kerberos-started.png)#(../images/enable-kerberos-started.png)#g'
+    metron-deployment/Kerberos-ambari-setup.md 's#(readme-images/enable-kerberos.png)#(../images/enable-kerberos.png)#g'
     metron-platform/metron-enrichment/README.md 's#(enrichment_arch.png)#(../../images/enrichment_arch.png)#g'
     metron-platform/metron-indexing/README.md 's#(indexing_arch.png)#(../../images/indexing_arch.png)#g'
     metron-platform/metron-parsers/README.md 's#(parser_arch.png)#(../../images/parser_arch.png)#g'