Posted to commits@slider.apache.org by st...@apache.org on 2014/06/30 17:37:11 UTC

[22/50] [abbrv] SLIDER-121 removed site documentation from git source

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/building.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/building.md b/src/site/markdown/developing/building.md
deleted file mode 100644
index ad36393..0000000
--- a/src/site/markdown/developing/building.md
+++ /dev/null
@@ -1,374 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Building Apache Slider
-
-
-Here is how to set up a build environment for Slider.
-
-## Before you begin
-
-### Networking
-
-The network on the development system must be functional, with hostname lookup
-of the local host working. Tests will fail without this.
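That precondition can be checked up front. A minimal sketch, assuming a Unix host with `getent` on the PATH (the `resolves_ok` helper is made up for illustration):

```shell
# Sketch: verify that the local hostname resolves before running tests.
# Assumes a Unix host with getent available; adapt the lookup for OS X.

# resolves_ok NAME -> exit status 0 if NAME can be looked up
resolves_ok() {
  getent hosts "$1" >/dev/null 2>&1
}

host="$(hostname)"
if resolves_ok "$host"; then
  echo "OK: $host resolves"
else
  echo "WARNING: $host does not resolve; tests will fail"
fi
```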
-
-### Java 7
-
-Slider is built on Java 7; set up a JDK for Java 7 or 8.
-
-### Maven
-
-You will need Maven 3.0+, configured with enough memory:
-
-    MAVEN_OPTS=-Xms256m -Xmx512m -Djava.awt.headless=true
-
-
-*Important*: As of October 6, 2013, Maven 3.1 is not supported due to
-[version issues](https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound).
-
-### Protoc
-
-You need a copy of the `protoc` compiler for protobuf compilation:
-
-1. OS/X: `brew install protobuf`
-1. Others: consult the [Building Hadoop documentation](http://wiki.apache.org/hadoop/HowToContribute).
-
-The version of `protoc` installed must be the same as that used by Hadoop itself.
-This is absolutely critical to prevent JAR version problems.
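A sketch of that version check follows. The expected version string is an assumption (set `EXPECTED_PROTOC` from the protobuf version your Hadoop release actually declares), and `protoc_version` is a hypothetical helper:

```shell
# Sketch: compare the installed protoc against the version Hadoop expects.
# EXPECTED_PROTOC is an assumption; read the real value from your Hadoop release.
EXPECTED_PROTOC="${EXPECTED_PROTOC:-2.5.0}"

# protoc_version -> prints e.g. "2.5.0" ("libprotoc 2.5.0" with the prefix stripped)
protoc_version() {
  protoc --version 2>/dev/null | awk '{print $2}'
}

if ! command -v protoc >/dev/null 2>&1; then
  echo "protoc is not installed"
elif [ "$(protoc_version)" = "$EXPECTED_PROTOC" ]; then
  echo "protoc $EXPECTED_PROTOC found"
else
  echo "protoc version mismatch: found '$(protoc_version)', want $EXPECTED_PROTOC"
fi
```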
-
-## Building a compatible Hadoop version
-
-
-Slider is built against Hadoop 2 -you can download and install
-a copy from the [Apache Hadoop Web Site](http://hadoop.apache.org).
-
-
-During development, it's convenient (but not mandatory)
-to have a local version of Hadoop -so that we can find and fix bugs/add features in
-Hadoop as well as in Slider.
-
-
-To build and install locally, clone the Hadoop source from apache git/github,
-check out the `release-2.4.0` tag, and create a branch off it
-
-    git clone git://git.apache.org/hadoop-common.git 
-    cd hadoop-common
-    git remote rename origin apache
-    git fetch --tags apache
-    git checkout -b release-2.4.0 tags/release-2.4.0
-
-
-For the scripts below, set the `HADOOP_VERSION` variable to the version
-
-    export HADOOP_VERSION=2.4.0
-    
-or, for building against a pre-release version of Hadoop 2.4
- 
-    git checkout branch-2
-    export HADOOP_VERSION=2.4.0-SNAPSHOT
-
-To build and install it locally, skipping the tests:
-
-    mvn clean install -DskipTests
-
-To make a tarball for use in test runs:
-
-    # On OS X
-    mvn clean install package -Pdist -Dtar -DskipTests -Dmaven.javadoc.skip=true
-
-    # On Linux (builds the native code too)
-    mvn clean package -Pdist -Pnative -Dtar -DskipTests -Dmaven.javadoc.skip=true
-
-Then expand this
-
-    pushd hadoop-dist/target/
-    gunzip hadoop-$HADOOP_VERSION.tar.gz 
-    tar -xvf hadoop-$HADOOP_VERSION.tar 
-    popd
-
-This creates an expanded version of Hadoop. You can now actually run Hadoop
-from this directory. Do note that unless you have the native code built for
-your target platform, Hadoop will be slower. 
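Hadoop ships a `checknative` subcommand that reports whether the native libraries were found; a guarded sketch (the `HADOOP_HOME` default below is an assumption; point it at your own `hadoop-dist` output):

```shell
# Sketch: report whether an expanded Hadoop build includes native libraries.
# The HADOOP_HOME default is an assumption; point it at your hadoop-dist output.
HADOOP_HOME="${HADOOP_HOME:-hadoop-dist/target/hadoop-2.4.0}"

if [ -x "$HADOOP_HOME/bin/hadoop" ]; then
  # "checknative -a" lists each native library and whether it was loaded
  "$HADOOP_HOME/bin/hadoop" checknative -a
else
  echo "no hadoop launcher under $HADOOP_HOME/bin"
fi
```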
-
-## Building a compatible HBase version
-
-If you need to build a version of HBase, rather than use a released version,
-here are the instructions (for the HBase 0.98 release branch).
-
-Clone the HBase repository from Apache git/github:
-
-    
-    git clone git://git.apache.org/hbase.git
-    cd hbase
-    git remote rename origin apache
-    git fetch --tags apache
-
-then
-
-    git checkout -b 0.98 apache/0.98
-or
-
-    git checkout tags/0.98.1
-    
-If you have already been building versions of HBase, remove the existing
-set of artifacts for safety:
-
-    rm -rf ~/.m2/repository/org/apache/hbase/
-    
-The maven command for building hbase artifacts against this hadoop version is 
-
-    mvn clean install assembly:single -DskipTests -Dmaven.javadoc.skip=true
-
-To use a different version of Hadoop from that defined in the `hadoop-two.version`
-property of `/pom.xml`:
-
-    mvn clean install assembly:single -DskipTests -Dmaven.javadoc.skip=true -Dhadoop-two.version=$HADOOP_VERSION
-
-This will create an HBase `tar.gz` file in the directory `hbase-assembly/target/`
-in the HBase source tree.
-
-    export HBASE_VERSION=0.98.1
-    
-    pushd hbase-assembly/target
-    gunzip hbase-$HBASE_VERSION-bin.tar.gz 
-    tar -xvf hbase-$HBASE_VERSION-bin.tar
-    gzip hbase-$HBASE_VERSION-bin.tar
-    popd
-
-This will create an untarred directory containing
-HBase. Both the `.tar.gz` and the untarred directory are needed for testing. Most
-tests work directly with the untarred directory, as this saves the time spent
-uploading, downloading and expanding the file.
-
-(If you set `HBASE_VERSION` to something else, that version is picked up
-instead; make sure Slider is in sync.)
-
-For more information (including recommended Maven memory configuration options),
-see [HBase building](http://hbase.apache.org/book/build.html)
-
-For building just the JAR files:
-
-    mvn clean install -DskipTests -Dhadoop.profile=2.0 -Dhadoop-two.version=$HADOOP_VERSION
-
-*Tip:* you can force set a version in Maven by having it update all the POMs:
-
-    mvn versions:set -DnewVersion=0.98.1-SNAPSHOT
-
-## Building Accumulo
-
-Clone Accumulo from Apache:
-
-    git clone http://git-wip-us.apache.org/repos/asf/accumulo.git
-
-
-Check out branch 1.6.1-SNAPSHOT
-
-    git checkout 1.6.1-SNAPSHOT
-
-In the accumulo project directory, build it
-
-    mvn clean install -Passemble -DskipTests -Dmaven.javadoc.skip=true \
-     -Dhadoop.profile=2
-
-The default Hadoop version for accumulo-1.6.1 is hadoop 2.4.0; to build
-against a different version use the command
-
-    mvn clean install -Passemble -DskipTests -Dmaven.javadoc.skip=true \
-     -Dhadoop.profile=2  -Dhadoop.version=$HADOOP_VERSION
-
-This creates an Accumulo `tar.gz` file in `assemble/target/`. Extract it
-to create an expanded directory:
-
-    accumulo/assemble/target/accumulo-1.6.1-SNAPSHOT-bin.tar.gz
-
-This can be done with the command sequence
-
-    export ACCUMULO_VERSION=1.6.1-SNAPSHOT
-
-    pushd assemble/target/
-    gunzip -f accumulo-$ACCUMULO_VERSION-bin.tar.gz
-    tar -xvf accumulo-$ACCUMULO_VERSION-bin.tar
-    popd
-
-Note that the final location of the accumulo files is needed for the configuration;
-it may be directly under `target/` or in a subdirectory, with
-a path such as `target/accumulo-$ACCUMULO_VERSION-dev/accumulo-$ACCUMULO_VERSION/`.
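A small sketch to locate whichever of the two layouts the build produced (the `find_accumulo_home` helper is made up; directory names follow the paths above):

```shell
# Sketch: find the expanded Accumulo directory, which may sit directly under
# target/ or be nested inside an accumulo-$VERSION-dev directory.
ACCUMULO_VERSION="${ACCUMULO_VERSION:-1.6.1-SNAPSHOT}"
TARGET_DIR="${TARGET_DIR:-assemble/target}"

# find_accumulo_home -> prints the first matching directory, if any
find_accumulo_home() {
  find "$TARGET_DIR" -maxdepth 2 -type d \
    -name "accumulo-$ACCUMULO_VERSION" 2>/dev/null | head -n 1
}

find_accumulo_home
```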
-
-
-## Testing
-
-### Configuring Slider to locate the relevant artifacts
-
-You must have the file `src/test/resources/slider-test.xml` (this
-is ignored by git), declaring where HBase, Accumulo, Hadoop and ZooKeeper are:
-
-    <configuration>
-    
-      <property>
-        <name>slider.test.hbase.home</name>
-        <value>/home/slider/hbase/hbase-assembly/target/hbase-0.98.0-SNAPSHOT</value>
-        <description>HBASE Home</description>
-      </property>
-    
-      <property>
-        <name>slider.test.hbase.tar</name>
-        <value>/home/slider/hbase/hbase-assembly/target/hbase-0.98.0-SNAPSHOT-bin.tar.gz</value>
-        <description>HBASE archive URI</description>
-      </property> 
-         
-      <property>
-        <name>slider.test.accumulo.home</name>
-        <value>/home/slider/accumulo/assemble/target/accumulo-1.6.1-SNAPSHOT/</value>
-        <description>Accumulo Home</description>
-      </property>
-    
-      <property>
-        <name>slider.test.accumulo.tar</name>
-        <value>/home/slider/accumulo/assemble/target/accumulo-1.6.1-SNAPSHOT-bin.tar.gz</value>
-        <description>Accumulo archive URI</description>
-      </property>
-      
-      <property>
-        <name>zk.home</name>
-        <value>/home/slider/Apps/zookeeper</value>
-        <description>Zookeeper home dir on target systems</description>
-      </property>
-    
-      <property>
-        <name>hadoop.home</name>
-        <value>/home/slider/hadoop-common/hadoop-dist/target/hadoop-2.3.0</value>
-        <description>Hadoop home dir on target systems</description>
-      </property>
-      
-    </configuration>
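Since this file is machine-specific and git-ignored, it can be generated per host. A sketch, showing only the HBase properties (all default paths are placeholders; point them at your own build trees):

```shell
# Sketch: generate a minimal slider-test.xml from environment variables.
# All default paths are placeholders for your own build output.
HBASE_HOME="${HBASE_HOME:-$HOME/hbase/hbase-assembly/target/hbase-0.98.0-SNAPSHOT}"
OUT="${OUT:-slider-test.xml}"

cat > "$OUT" <<EOF
<configuration>
  <property>
    <name>slider.test.hbase.home</name>
    <value>$HBASE_HOME</value>
    <description>HBase home</description>
  </property>
  <property>
    <name>slider.test.hbase.tar</name>
    <value>$HBASE_HOME-bin.tar.gz</value>
    <description>HBase archive URI</description>
  </property>
</configuration>
EOF
echo "wrote $OUT"
```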
-    
-
-## Debugging a failing test
-
-1. Locate the directory `target/$TESTNAME`, where `TESTNAME` is the name of the
-test case or test method. This directory contains the Mini YARN Cluster
-logs. For example, `TestLiveRegionService` stores its data under
-`target/TestLiveRegionService`.
-
-1. Look under that directory for `-logdir` directories, then for an application
-and container containing logs. There may be more than one node being simulated;
-every node manager creates its own logdir.
-
-1. Look for the `out.txt` and `err.txt` files for stdout and stderr log output.
-
-1. Slider uses SLF4J to log to `out.txt`; remotely executed processes may use
-either stream for logging.
-
-Example:
-
-    target/TestLiveRegionService/TestLiveRegionService-logDir-nm-1_0/application_1376095770244_0001/container_1376095770244_0001_01_000001/out.txt
-
-1. The actual test log from JUnit itself goes to the console and into 
-`target/surefire/`; this shows the events happening in the YARN services as well
- as (if configured) HDFS and Zookeeper. It is noisy -everything after the *teardown*
- message happens during cluster teardown, after the test itself has been completed.
- Exceptions and messages here can generally be ignored.
- 
-This is all a bit complicated; debugging is simpler if a single test is run at a
-time, which is straightforward:
-
-    mvn clean test -Dtest=TestLiveRegionService
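The log hunt described above can also be scripted. A sketch (the `list_test_logs` helper is made up; the directory layout follows the example path above):

```shell
# Sketch: list the stdout/stderr files a minicluster test left under target/.
# The helper name is made up; the layout follows the example path above.

# list_test_logs TESTNAME -> prints every out.txt/err.txt under target/TESTNAME
list_test_logs() {
  find "target/$1" -type f \( -name out.txt -o -name err.txt \) 2>/dev/null
}

list_test_logs "${1:-TestLiveRegionService}"
```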
-
-
-### Building the JAR file
-
-You can create the JAR file and set up its directories with
-
-     mvn package -DskipTests
-
-# Development Notes
-
-<!---
-## Git branch model
-
-
-The git branch model used is
-[Git Flow](http://nvie.com/posts/a-successful-git-branching-model/).
-
-This is a common workflow model for Git, and built in to
-[Atlassian Source Tree](http://sourcetreeapp.com/).
- 
-The command line `git-flow` tool is easy to install 
- 
-    brew install git-flow
- 
-or
-
-    apt-get install git-flow
- 
-You should then work on all significant features in their own branch and
-merge them back in when they are ready.
-
- 
-    # until we get a public JIRA we're just using an in-house one. sorry
-    git flow feature start BUG-8192
-    
-    # finishes merges back in to develop/
-    git flow feature finish BUG-8192
-    
-    # release branch
-    git flow release start 0.4.0
-    
-    git flow release finish 0.4.0
--->
-
-## Attention OS/X developers
-
-YARN on OS/X doesn't terminate subprocesses the way it does on Linux, so
-HBase Region Servers created by the hbase shell script remain running
-even after the tests terminate.
-
-This causes some tests -especially those related to flexing down- to fail,
-and test reruns may behave unpredictably. If a test fails because there
-are too many region servers running, this is the likely cause.
-
-After every test run: do a `jps -v` to look for any leftover HBase services
--and kill them.
-
-Here is a handy bash command to do this
-
-    jps -l | grep HRegion | awk '{print $1}' | xargs kill -9
-
-
-## Groovy 
-
-Slider uses Groovy 2.x as its language for writing tests -for better assertions
-and easier handling of lists and closures. Although the first prototype
-used Groovy on the production source, this was dropped in favor of
-a Java-only production codebase.
-
-## Maven utils
-
-
-Here are some handy aliases to make Maven easier to use:
-
-    alias mci='mvn clean install -DskipTests'
-    alias mi='mvn install -DskipTests'
-    alias mvct='mvn clean test'
-    alias mvnsite='mvn site:site -Dmaven.javadoc.skip=true'
-    alias mvt='mvn test'
-
-

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/functional_tests.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/functional_tests.md b/src/site/markdown/developing/functional_tests.md
deleted file mode 100644
index 8b5e170..0000000
--- a/src/site/markdown/developing/functional_tests.md
+++ /dev/null
@@ -1,416 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Testing Apache Slider
-
-     The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
-      NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and
-      "OPTIONAL" in this document are to be interpreted as described in
-      RFC 2119.
-
-# Functional Tests
-
-The functional test suite is designed to test Slider against
-a live cluster.
-
-For these to work you need
-
-1. A YARN Cluster -secure or insecure
-1. A `slider-client.xml` file configured to interact with the cluster
-1. Agent 
-1. HBase tests: HBase `.tar.gz` uploaded to HDFS, and a local or remote HBase conf
-directory
-1. Accumulo tests: Accumulo `.tar.gz` uploaded to HDFS, and a local or remote Accumulo conf
-directory
-
-## Configuration of functional tests
-
-Maven needs to be given
-
-1. A path to the expanded test archive
-1. A path to a slider configuration directory for the cluster
-
-The path to the expanded test archive is automatically calculated as the directory under
-`../slider-assembly/target` where an untarred slider distribution can be found.
-If it is not present, the tests will fail.
-
-The path to the configuration directory must be supplied in the property
-`slider.conf.dir` which can be set on the command line
-
-    mvn test -Dslider.conf.dir=src/test/configs/sandbox/slider
-
-It can also be set in the (optional) file `slider-funtest/build.properties`:
-
-    slider.conf.dir=src/test/configs/sandbox/slider
-
-This file is loaded whenever a slider build or test run takes place
-
-## Configuration of `slider-client.xml`
-
-The `slider-client.xml` must have extra configuration options for both the HBase and
-Accumulo tests, as well as a common set for actually talking to a YARN cluster.
-
-## Disabling the functional tests entirely
-
-All functional tests which require a live YARN cluster
-can be disabled through the property `slider.funtest.enabled`
-  
-    <property>
-      <name>slider.funtest.enabled</name>
-      <value>false</value>
-    </property>
-
-There is a configuration to do exactly this in
-`src/test/configs/offline/slider`:
-
-    slider.conf.dir=src/test/configs/offline/slider
-
-Tests which do not require a live YARN cluster will still run;
-these verify that the `bin/slider` script works.
-
-### Non-mandatory options
-
-The following test options may be added to `slider-client.xml` if the defaults
-need to be changed
-                   
-    <property>
-      <name>slider.test.zkhosts</name>
-      <description>comma separated list of ZK hosts</description>
-      <value>localhost</value>
-    </property>
-       
-    <property>
-      <name>slider.test.thaw.wait.seconds</name>
-      <description>Time to wait in seconds for a thaw to result in a running AM</description>
-      <value>60000</value>
-    </property>
-    
-    <property>
-      <name>slider.test.freeze.wait.seconds</name>
-      <description>Time to wait in seconds for a freeze to halt the cluster</description>
-      <value>60000</value>
-    </property>
-            
-     <property>
-      <name>slider.test.timeout.millisec</name>
-      <description>Time out in milliseconds before a test is considered to have failed.
-      There are some maven properties which also define limits and may need adjusting</description>
-      <value>180000</value>
-    </property>
-
-     <property>
-      <name>slider.test.yarn.ram</name>
-      <description>Size in MB to ask for containers</description>
-      <value>192</value>
-    </property>
-
-    
-Note that while the same properties need to be set in
-`slider-core/src/test/resources/slider-client.xml`, those tests take a file in the local
-filesystem; here a URI to a path visible across all nodes in the cluster is required,
-as the tests do not copy the `.tar`/`.tar.gz` files over. The application configuration
-directories may be local or remote -they are copied into the `.slider` directory
-during cluster creation.
-
-##  Provider-specific parameters
-
-An individual provider can pick up settings from its own
-`src/test/resources/slider-client.xml` file, or the one in `slider-core`.
-We strongly advise placing all the values in the `slider-core` file:
-
-1. All uncertainty about which file is picked up on the class path first goes
-away.
-2. There's one place to keep all the configuration values in sync.
-
-### Agent Tests
-
-Agent tests are run with the following mvn command, executed in the `slider-funtest` directory:
-
-```
-cd slider-funtest
-mvn test -Dslider.conf.dir=../src/test/clusters/remote/slider -Dtest=TestAppsThroughAgent -DfailIfNoTests=false
-```
-
-**Enable/Execute the tests**
-
-To enable the test ensure that *slider.test.agent.enabled* is set to *true*.
-
-    <property>
-      <name>slider.test.agent.enabled</name>
-      <description>Flag to enable/disable Agent tests</description>
-      <value>true</value>
-    </property>
-        
-**Test setup**
-
-Edit the config file `src/test/clusters/remote/slider/slider-client.xml` and ensure that the host names are accurate for the test cluster.
-
-**User setup**
-
-Ensure that the user running the test is present on the cluster against which you are running the tests. The user must be a member of the `hadoop` group, e.g.:
-
-    adduser testuser -d /home/testuser -G hadoop -m
-
-**HDFS Setup**
-
-Set up HDFS folders for slider and the test user:
-
-    su hdfs
-    hdfs dfs -mkdir /slider
-    hdfs dfs -chown testuser:hdfs /slider
-    hdfs dfs -mkdir /user/testuser
-    hdfs dfs -chown testuser:hdfs /user/testuser
-
-Load up the agent package and config:
-
-    su testuser
-    hdfs dfs -mkdir /slider/agent
-    hdfs dfs -mkdir /slider/agent/conf
-    hdfs dfs -copyFromLocal SLIDER_INSTALL_LOC/agent/conf/agent.ini /slider/agent/conf
-
-Ensure the correct host name is provided for the agent tarball:
-        
-    <property>
-      <name>slider.test.agent.tar</name>
-      <description>Path to the Agent Tar file in HDFS</description>
-      <value>hdfs://NN_HOSTNAME:8020/slider/agent/slider-agent.tar.gz</value>
-    </property>
-
-
-
-### HBase Tests
-
-The HBase tests can be enabled or disabled
-    
-    <property>
-      <name>slider.test.hbase.enabled</name>
-      <description>Flag to enable/disable HBase tests</description>
-      <value>true</value>
-    </property>
-        
-Mandatory test parameters must be added to `slider-client.xml`
-
-    <property>
-      <name>slider.test.hbase.tar</name>
-      <description>Path to the HBase Tar file in HDFS</description>
-      <value>hdfs://sandbox:8020/user/slider/hbase.tar.gz</value>
-    </property>
-    
-    <property>
-      <name>slider.test.hbase.appconf</name>
-      <description>Path to the directory containing the HBase application config</description>
-      <value>file://${user.dir}/src/test/configs/sandbox/hbase</value>
-    </property>
-    
-Optional parameters:  
-  
-     <property>
-      <name>slider.test.hbase.launch.wait.seconds</name>
-      <description>Time to wait in seconds for HBase to start</description>
-      <value>1800</value>
-    </property>  
-
-#### Accumulo configuration options
-
-Enable/disable the tests
-
-     <property>
-      <name>slider.test.accumulo.enabled</name>
-      <description>Flag to enable/disable Accumulo tests</description>
-      <value>true</value>
-     </property>
-         
-Optional parameters
-         
-     <property>
-      <name>slider.test.accumulo.launch.wait.seconds</name>
-      <description>Time to wait in seconds for Accumulo to start</description>
-      <value>1800</value>
-     </property>
-
-### Configuring the YARN cluster for tests
-
-Here are the configuration options we use in `yarn-site.xml` for testing:
-
-These tell YARN to ignore memory requirements when allocating containers, and
-to keep the log files around after an application run.
-
-      <property>
-        <name>yarn.scheduler.minimum-allocation-mb</name>
-        <value>1</value>
-      </property>
-      <property>
-        <description>Whether physical memory limits will be enforced for
-          containers.
-        </description>
-        <name>yarn.nodemanager.pmem-check-enabled</name>
-        <value>false</value>
-      </property>
-      <!-- we really don't want checking here-->
-      <property>
-        <name>yarn.nodemanager.vmem-check-enabled</name>
-        <value>false</value>
-      </property>
-      
-      <!-- how long after a failure to see what is left in the directory-->
-      <property>
-        <name>yarn.nodemanager.delete.debug-delay-sec</name>
-        <value>60000</value>
-      </property>
-    
-      <!--thirty seconds before the process gets a -9 -->
-      <property>
-        <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
-        <value>30000</value>
-      </property>
-
-
-## Testing against a secure cluster
-
-To test against a secure cluster
-
-1. `slider-client.xml` must be configured as per [Security](../security.html).
-1. The client must have Kerberos tokens issued so that the user running
-the tests has access to HDFS and YARN.
-
-If there are problems authenticating (including the cluster being offline),
-the tests appear to hang.
-
-### Validating the configuration
-
-    mvn test -Dtest=TestBuildSetup
-
-### Using relative paths in test configurations
-
-When you are sharing configurations across machines via SCM or similar,
-it's impossible to have absolute paths in the configuration options pointing to
-the location of items in the local filesystem (e.g. configuration directories).
-
-There are two techniques:
-
-1. Keep the data in HDFS and refer to it there. This works if there is a shared,
-persistent HDFS cluster.
-
-1. Use the special property `slider.test.conf.dir` that is set to the path
-of the directory, and which can then be used to create an absolute path
-from paths relative to the configuration dir:
-
-    <property>
-      <name>slider.test.hbase.appconf</name>
-      <description>Path to the directory containing the HBase application config</description>
-      <value>file://${slider.test.conf.dir}/../hbase</value>
-    </property>
-
-
-If the actual XML file path is required, a similar property
-`slider.test.conf.xml` is set.
-
-
-## Parallel execution
-
-Attempts to run test cases in parallel failed -even with a configuration
-to run methods in a class sequentially, but separate classes independently.
-
-Even after identifying and eliminating some unintended sharing of static
-mutable variables, trying to run test cases in parallel seemed to hang
-tests and produce timeouts.
-
-For this reason parallel tests have been disabled. To accelerate test runs
-through parallelization, run different tests on different hosts instead.
-
-## Other constraints
-
-* Port assignments SHOULD NOT be fixed, as this will cause clusters to fail if
-there are too many instances of a role on a same host, or if other tests are
-using the same port.
-* If a test does need to fix a port, it MUST be for a single instance of a role,
-and it must be different from all others. The assignment should be set in 
-`org.apache.slider.funtest.itest.PortAssignments` so as to ensure uniqueness
-over time. Otherwise: use the value of `0` to allow the OS to assign free ports
-on demand.
-
-## Test Requirements
-
-
-1. Test cases should be written so that each class works with exactly one
-Slider-deployed cluster
-1. Every test MUST have its own cluster name -preferably derived from the
-classname.
-1. This cluster should be deployed in an `@BeforeClass` method.
-1. The `@AfterClass` method MUST tear this cluster down.
-1. Tests must skip their execution if functional tests -or the 
-specific hbase or accumulo categories- are disabled.
-1. Tests within the suite (i.e. class) must be designed to be independent
--to work irrespectively of the ordering of other tests.
-
-## Running and debugging the functional tests.
-
-To run the functional tests:
-
-1. In the root `slider` directory, build a complete Slider release
-
-        mvn install -DskipTests
-1. Start the YARN cluster/set up proxies to connect to it, etc.
-
-1. In the `slider-funtest` dir, run the tests
-
-        mvn test 
-        
-A common mistake during development is to rebuild the `slider-core` JARs
-then the `slider-funtest` tests without rebuilding the `slider-assembly`.
-In this situation, the tests are in sync with the latest build of the code
--including any bug fixes- but the scripts executed by those tests use
-a previous build of `slider-core.jar`. As a result, the fixes are not picked
-up.
-
-**To propagate changes in slider-core through to the funtest classes for
-testing, you must build/install all the slider packages from the root assembly.**
-
-    mvn clean install -DskipTests
-
-## Limitations of slider-funtest
-
-1. All tests run from a single client -the workload can't scale.
-1. Output from failed AMs and containers isn't collected.
-
-## Troubleshooting the functional tests
-
-1. If application instances fail to come up because there are still outstanding
-requests, it means that YARN didn't have the RAM/cores to spare for the number
-of containers. Edit `slider.test.yarn.ram` to make it smaller.
-
-1. If you are testing in a local VM and it stops responding, it will have been
-swapped out to disk. Rebooting can help, but for a long-term fix go through
-all the Hadoop configurations (HDFS, YARN, Zookeeper) and set their heaps to
-smaller numbers, like 256M each. Also: turn off unused services (hcat, oozie,
-webHDFS).
-
-1. The YARN UI will list the cluster launches; look for the one
-with a name close to the test and view its logs.
-
-1. Container logs will appear "elsewhere". The log lists
-the containers used -you may be able to track the logs
-down from the specific nodes.
-
-1. If you browse the filesystem, look for the specific test clusters
-in `~/.slider/cluster/$testname`.
-
-1. If you are using a secure cluster, make sure that the clocks
-are synchronized, and that you have a current token -`klist` will
-tell you this. In a VM: install and enable `ntp`, and consider rebooting if there
-are any problems. Check also that it has the same time zone settings
-as the host OS.

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/index.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/index.md b/src/site/markdown/developing/index.md
deleted file mode 100644
index 4372495..0000000
--- a/src/site/markdown/developing/index.md
+++ /dev/null
@@ -1,35 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-  
-# Developing Apache Slider
-
-Slider is an open source project -anyone is free to contribute, and we
-strongly encourage people to do so.
-
-Here are documents covering how to go about building, testing and releasing
-Slider:
-
-* [Building](building.html)
-* [Debugging](../debugging.html)
-* [Testing](testing.html)
-* [Functional Testing](functional_tests.html)
-* [Manual Testing](manual_testing.html)
-* [Agent test setup](agent_test_setup.html)
-* [Releasing](releasing.html)
-
-
- 

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/manual_testing.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/manual_testing.md b/src/site/markdown/developing/manual_testing.md
deleted file mode 100644
index bfc7e7c..0000000
--- a/src/site/markdown/developing/manual_testing.md
+++ /dev/null
@@ -1,53 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-  
-# Manually Testing Apache Slider
-
-Manual testing involves using the Slider package and an AppPackage to perform basic
-cluster functions such as create/destroy, flex up/down, and freeze/thaw.
-A python helper script is provided that can be used to automatically test an app package.
-
-## `SliderTester.py`
-Details to be added.
-
-## `SliderTester.ini`
-The various config parameters are:
-
-### slider
-* `package`: location of the slider package
-* `jdk.path`: jdk path on the test hosts
-
-### app
-* `package`: location of the app package
-
-### cluster
-* `yarn.application.classpath`: yarn application classpaths
-* `slider.zookeeper.quorum`: the ZK quorum hosts
-* `yarn.resourcemanager.address`:
-* `yarn.resourcemanager.scheduler.address`:
-* `fs.defaultFS`: e.g. `hdfs://NN_HOST:8020`
-
-### test
-* `app.user`: user to use for app creation
-* `hdfs.root.user`: hdfs root user
-* `hdfs.root.dir`: HDFS root, default /slidertst
-* `hdfs.user.dir`: HDFS user dir, default /user
-* `test.root`: local test root folder, default /test
-* `cluster.name`: name of the test cluster, default tst1
-* `cluster.type`: cluster type to build and test, e.g. hbase,storm,accumulo
-
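-A hypothetical `SliderTester.ini`, assembled from the parameters listed above; every value below is a placeholder, not a recommendation:

```ini
[slider]
package=/tmp/slider-0.30.0-all.tar.gz
jdk.path=/usr/jdk64/jdk1.7.0_45

[app]
package=/tmp/hbase_v096.tar

[cluster]
yarn.application.classpath=/etc/hadoop/conf,/usr/lib/hadoop/*
slider.zookeeper.quorum=zk1:2181,zk2:2181,zk3:2181
yarn.resourcemanager.address=rmhost:8050
yarn.resourcemanager.scheduler.address=rmhost:8030
fs.defaultFS=hdfs://NN_HOST:8020

[test]
app.user=yarn
hdfs.root.user=hdfs
hdfs.root.dir=/slidertst
hdfs.user.dir=/user
test.root=/test
cluster.name=tst1
cluster.type=hbase
```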
-### agent

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/releasing.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/releasing.md b/src/site/markdown/developing/releasing.md
deleted file mode 100644
index a8380bf..0000000
--- a/src/site/markdown/developing/releasing.md
+++ /dev/null
@@ -1,195 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-
-# Apache Slider Release Process
-
-Here is our release process.
-
-
-## IMPORTANT: THIS IS OUT OF DATE WITH THE MOVE TO THE ASF ## 
-
-### Before you begin
-
-Check out the latest version of the develop branch,
-run the tests. This should be done on a checked out
-version of the code that is not the one you are developing on
-(ideally, a clean VM), to ensure that you aren't releasing a slightly
-modified version of your own, and that you haven't accidentally
-included passwords or other test run details into the build resource
-tree.
-
-The `slider-funtest` functional test package is used to run functional
-tests against a running Hadoop YARN cluster. It needs to be configured
-according to the instructions in [testing](testing.html) to
-create HBase and Accumulo clusters in the YARN cluster.
-
-*Make sure that the functional tests are passing (and not being skipped) before
-starting to make a release*
-
-
-
-**Step #1:** Create a JIRA for the release, estimate 3h
-(so you don't try to skip the tests)
-
-    export SLIDER_RELEASE_JIRA=SLIDER-13927
-    
-**Step #2:** Check everything in. Git flow won't let you progress without this.
-
-**Step #3:** Git flow: create a release branch
-
-    export SLIDER_RELEASE=0.5.2
-    
-    git flow release start slider-$SLIDER_RELEASE
-
-**Step #4:** in the new branch, increment those version numbers using the
-[Maven versions plugin](http://mojo.codehaus.org/versions-maven-plugin/)
-
-    mvn versions:set -DnewVersion=$SLIDER_RELEASE
-
-
-**Step #5:** commit the changed POM files
-  
-    git add <changed files>
-    git commit -m "$SLIDER_RELEASE_JIRA updating release POMs for $SLIDER_RELEASE"
-
-  
-**Step #6:** Do a final test run to make sure nothing is broken
-
-In the `slider` directory, run:
-
-    mvn clean install -DskipTests
-
-Once everything is built, including the `.tar` files, run the tests:
-
-    mvn test
-
-This will run the functional tests as well as the `slider-core` tests.
-
-It is wise to reset any VMs here, and on live clusters kill all running jobs.
-This stops functional tests failing because the job doesn't get started before
-the tests time out.
-
-As the test run takes 30-60+ minutes, now is a good time to consider
-finalizing the release notes.
-
-
-**Step #7:** Build the release package
-
-Run
-    
-    mvn clean site:site site:stage package -DskipTests
-
-
-
-**Step #8:** validate the tar file
-
-Look in `slider-assembly/target` to find the `.tar.gz` file, and the
-expanded version of it. Inspect that expanded version to make sure that
-everything looks good -and that the versions of all the dependent artifacts
-look good too: there must be no `-SNAPSHOT` dependencies.
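-One way to automate the `-SNAPSHOT` scan is a small shell helper; this is a sketch, and the function name and example path are assumptions, not part of the release tooling:

```shell
#!/usr/bin/env sh
# check_no_snapshots <tarball>: succeed only if no entry inside the
# release tar.gz has "-SNAPSHOT" in its name. Matching entries are
# printed so the offending artifacts can be identified.
check_no_snapshots() {
  if tar -tzf "$1" | grep -- '-SNAPSHOT'; then
    echo "ERROR: SNAPSHOT artifacts found in $1" >&2
    return 1
  fi
  echo "No SNAPSHOT artifacts in $1"
}

# hypothetical usage:
#   check_no_snapshots slider-assembly/target/slider-0.5.2-all.tar.gz
```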
-
-
-**Step #9:** Build the release notes
-
-Create a one-line plain text release note for commits and tags,
-and a multi-line markdown release note, which will be used for artifacts.
-
-
-    Release against hadoop 2.4.0, HBase-0.98.1 and Accumulo 1.5.1 artifacts. 
-
-The multi-line release notes should go into `slider/src/site/markdown/release_notes`.
-
-
-These should be committed
-
-    git add --all
-    git commit -m "$SLIDER_RELEASE_JIRA updating release notes"
-
-**Step #10:** End the git flow
-
-Finish the git flow release, either in the SourceTree GUI or
-the command line:
-
-    
-    git flow release finish slider-$SLIDER_RELEASE
-    
-
-On the command line you have to enter the one-line release description
-prepared earlier.
-
-You will now be back on the `develop` branch.
-
-**Step #11:** update mvn versions
-
-Switch back to `develop` and update its version number past
-the release number
-
-
-    export SLIDER_RELEASE=0.6.0-SNAPSHOT
-    mvn versions:set -DnewVersion=$SLIDER_RELEASE
-    git commit -a -m "$SLIDER_RELEASE_JIRA updating development POMs to $SLIDER_RELEASE"
-
-**Step #12:** Push the release and develop branches to github 
-
-    git push origin master develop 
-
-(assuming that `origin` maps to `git@github.com:hortonworks/slider.git`;
- you can check this with `git remote -v`)
-
-
-The `git-flow` program automatically pushes up the `release/slider-X.Y` branch,
-before deleting it locally.
-
-If you are planning on any release work of more than a single test run,
-consider having your local release branch track the master.
-
-
-**Step #13:** Release small artifacts on github
-
-Browse to https://github.com/hortonworks/slider/releases/new
-
-Create a new release on the site by following the instructions
-
-Files under 5GB can be published directly. Otherwise, follow step 14
-
-**Step #14:**  For releasing via an external CDN (e.g. Rackspace Cloud)
-
-Using the web GUI for your particular distribution network, upload the
-`.tar.gz` artifact
-
-After doing this, edit the release notes on github to point to the
-tar file's URL.
-
-Example: 
-    [Download slider-0.10.1-all.tar.gz](http://dffeaef8882d088c28ff-185c1feb8a981dddd593a05bb55b67aa.r18.cf1.rackcdn.com/slider-0.10.1-all.tar.gz)
-
-**Step #15:** Announce the release 
-
-**Step #16:** Finish the JIRA
-
-Log the time, close the issue. This should normally be the end of a 
-sprint -so wrap that up too.
-
-**Step #17:** Get back to developing!
-
-Check out the develop branch and purge all release artifacts
-
-    git checkout develop
-    git pull origin
-    mvn clean
-    

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/developing/testing.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/developing/testing.md b/src/site/markdown/developing/testing.md
deleted file mode 100644
index 2c2ae62..0000000
--- a/src/site/markdown/developing/testing.md
+++ /dev/null
@@ -1,182 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Testing Apache Slider
-
-     The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
-      NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and
-      "OPTIONAL" in this document are to be interpreted as described in
-      RFC 2119.
-
-## Standalone Tests
-
-Slider core contains a suite of tests that are designed to run on the local machine,
-using Hadoop's `MiniDFSCluster` and `MiniYARNCluster` classes to create small,
-one-node test clusters. All the YARN/HDFS code runs in the JUnit process; the
-AM and spawned processes run independently.
-
-
-
-### For HBase Tests in `slider-providers/hbase`
-
-Requirements:
-
-* A copy of `hbase.tar.gz` in the local filesystem
-* An expanded `hbase.tar.gz` in the local filesystem
-
-
-### For Accumulo Tests in `slider-providers/accumulo`
-* A copy of `accumulo.tar.gz` in the local filesystem
-* An expanded `accumulo.tar.gz` in the local filesystem
-* An expanded Zookeeper installation
-
-All of these need to be defined in the file `slider-core/src/test/resources/slider-test.xml`
-
-Example:
-  
-    <configuration>
-    
-      <property>
-        <name>slider.test.hbase.enabled</name>
-        <description>Flag to enable/disable HBase tests</description>
-        <value>true</value>
-      </property>
-      
-      <property>
-        <name>slider.test.hbase.home</name>
-        <value>/home/slider/hbase-0.98.0</value>
-        <description>HBASE Home</description>
-      </property>
-    
-      <property>
-        <name>slider.test.hbase.tar</name>
-        <value>/home/slider/Projects/hbase-0.98.0-bin.tar.gz</value>
-        <description>HBASE archive URI</description>
-      </property>
-    
-      <property>
-        <name>slider.test.accumulo.enabled</name>
-        <description>Flag to enable/disable Accumulo tests</description>
-        <value>true</value>
-      </property>
-    
-      <property>
-        <name>slider.test.accumulo.home</name>
-        <value>/home/slider/accumulo-1.6.0-SNAPSHOT/</value>
-        <description>Accumulo Home</description>
-      </property>
-    
-      <property>
-        <name>slider.test.accumulo.tar</name>
-        <value>/home/slider/accumulo-1.6.0-SNAPSHOT-bin.tar</value>
-        <description>Accumulo archive URI</description>
-      </property>
-
-      <property>
-        <name>slider.test.am.restart.time</name>
-        <description>Time in millis to await an AM restart</description>
-        <value>30000</value>
-      </property>
-
-      <property>
-        <name>zk.home</name>
-        <value>/home/slider/zookeeper</value>
-        <description>Zookeeper home dir on target systems</description>
-      </property>
-    
-      <property>
-        <name>hadoop.home</name>
-        <value>/home/slider/hadoop-2.2.0</value>
-        <description>Hadoop home dir on target systems</description>
-      </property>
-      
-    </configuration>
-
-*Important:* For the local tests, a simple local filesystem path is used for
-all the values. 
-
-For the functional tests, the accumulo and hbase tar properties will
-need to be set to a URL of a tar file that is accessible to all the
-nodes in the cluster -which usually means HDFS, and so an `hdfs://` URL
-
-
-##  Provider-specific parameters
-
-An individual provider can pick up settings from its own
-`src/test/resources/slider-client.xml` file, or the one in `slider-core`.
-We strongly advise placing all the values in the `slider-core` file:
-
-1. All uncertainty about which file is picked up on the classpath first goes
-away.
-2. There's one place to keep all the configuration values in sync.
-
-### Agent Tests
-
-
-### HBase Tests
-
-The HBase tests can be enabled or disabled
-    
-    <property>
-      <name>slider.test.hbase.enabled</name>
-      <description>Flag to enable/disable HBase tests</description>
-      <value>true</value>
-    </property>
-        
-Mandatory test parameters must be added to `slider-client.xml`
-
-  
-    <property>
-      <name>slider.test.hbase.tar</name>
-      <description>Path to the HBase Tar file in HDFS</description>
-      <value>hdfs://sandbox:8020/user/slider/hbase.tar.gz</value>
-    </property>
-    
-    <property>
-      <name>slider.test.hbase.appconf</name>
-      <description>Path to the directory containing the HBase application config</description>
-      <value>file://${user.dir}/src/test/configs/sandbox/hbase</value>
-    </property>
-    
-Optional parameters:  
-  
-     <property>
-      <name>slider.test.hbase.launch.wait.seconds</name>
-      <description>Time to wait in seconds for HBase to start</description>
-      <value>180000</value>
-    </property>  
-
-
-#### Accumulo configuration options
-
-Enable/disable the tests
-
-     <property>
-      <name>slider.test.accumulo.enabled</name>
-      <description>Flag to enable/disable Accumulo tests</description>
-      <value>true</value>
-     </property>
-         
-         
-Optional parameters
-         
-     <property>
-      <name>slider.test.accumulo.launch.wait.seconds</name>
-      <description>Time to wait in seconds for Accumulo to start</description>
-      <value>180000</value>
-     </property>
-

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/examples.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/examples.md b/src/site/markdown/examples.md
deleted file mode 100644
index f706f2f..0000000
--- a/src/site/markdown/examples.md
+++ /dev/null
@@ -1,159 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Apache Slider Examples
-
- 
-## Setup
- 
-### Setting up a YARN cluster
- 
-For simple local demos, a Hadoop pseudo-distributed cluster will suffice -if on a VM then
-its configuration should be changed to use a public (machine public) IP.
-
-The examples below all assume there is a cluster node called 'master', which
-hosts the HDFS NameNode and the YARN Resource Manager.
-
-
-#### Preamble
-
-    export HADOOP_CONF_DIR=~/conf
-    export PATH=~/hadoop/bin:~/hadoop/sbin:~/zookeeper-3.4.5/bin:$PATH
-    
-    hdfs namenode -format master
-  
-
-
-
-#### Start all the services
-
-    nohup hdfs --config $HADOOP_CONF_DIR namenode & 
-    nohup hdfs --config $HADOOP_CONF_DIR datanode &
-    
-    
-    nohup yarn --config $HADOOP_CONF_DIR resourcemanager &
-    nohup yarn --config $HADOOP_CONF_DIR nodemanager &
-    
-#### Using the hadoop/sbin service launchers
-    
-    hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
-    hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
-    yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
-    yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
-    
-    ~/zookeeper/bin/zkServer.sh start
-    
-    
-#### Stop them
-
-    hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
-    hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
-    
-    yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
-    yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
-    
-
-
-NN up on [http://master:50070/dfshealth.jsp](http://master:50070/dfshealth.jsp)
-RM up on [http://master:8088/](http://master:8088/)
-
-    ~/zookeeper/bin/zkServer.sh start
-
-
-    # shutdown
-    ~/zookeeper/bin/zkServer.sh stop
-
-
-Tip: after a successful run on a local cluster, do a quick `rm -rf $HADOOP_HOME/logs`
-to keep the log bloat under control.
-
-## Get HBase in
-
-Copy the HBase tar file (e.g. `hbase-0.98.0-bin.tar`) to the local filesystem, then upload it to HDFS:
-
-
-    hdfs dfs -rm hdfs://master:9090/hbase.tar
-    hdfs dfs -copyFromLocal hbase-0.98.0-bin.tar hdfs://master:9090/hbase.tar
-
-or
-    
-    hdfs dfs -copyFromLocal hbase-0.96.0-bin.tar hdfs://master:9090/hbase.tar
-    hdfs dfs -ls hdfs://master:9090/
-    
-
-### Optional: point bin/slider at your chosen cluster configuration
-
-    export SLIDER_CONF_DIR=~/Projects/slider/slider-core/src/test/configs/ubuntu-secure/slider
-
-## Optional: Clean up any existing slider cluster details
-
-This is for demos only, otherwise you lose the clusters and their databases.
-
-    hdfs dfs -rm -r hdfs://master:9090/user/home/stevel/.slider
-
-## Create a Slider Cluster
- 
- 
-    slider  create cl1 \
-    --component worker 1  --component master 1 \
-     --manager master:8032 --filesystem hdfs://master:9090 \
-     --zkhosts localhost:2181 --image hdfs://master:9090/hbase.tar
-    
-    # create the cluster
-    
-    slider create cl1 \
-     --component worker 4 --component master 1 \
-      --manager master:8032 --filesystem hdfs://master:9090 --zkhosts localhost \
-      --image hdfs://master:9090/hbase.tar \
-      --appconf file:///Users/slider/Hadoop/configs/master/hbase \
-      --compopt master jvm.heap 128 \
-      --compopt master env.MALLOC_ARENA_MAX 4 \
-      --compopt worker jvm.heap 128 
-
-    # freeze the cluster
-    slider freeze cl1 \
-    --manager master:8032 --filesystem hdfs://master:9090
-
-    # thaw a cluster
-    slider thaw cl1 \
-    --manager master:8032 --filesystem hdfs://master:9090
-
-    # destroy the cluster
-    slider destroy cl1 \
-    --manager master:8032 --filesystem hdfs://master:9090
-
-    # list clusters
-    slider list cl1 \
-    --manager master:8032 --filesystem hdfs://master:9090
-    
-    slider flex cl1 --component worker 2
-    --manager master:8032 --filesystem hdfs://master:9090 \
-    --component worker 5
-    
-## Create an Accumulo Cluster
-
-    slider create accl1 --provider accumulo \
-    --component master 1 --component tserver 1 --component gc 1 --component monitor 1 --component tracer 1 \
-    --manager localhost:8032 --filesystem hdfs://localhost:9000 \
-    --zkhosts localhost:2181 --zkpath /local/zookeeper \
-    --image hdfs://localhost:9000/user/username/accumulo-1.6.0-SNAPSHOT-bin.tar \
-    --appconf hdfs://localhost:9000/user/username/accumulo-conf \
-    -O zk.home /local/zookeeper -O hadoop.home /local/hadoop \
-    -O site.monitor.port.client 50095 -O accumulo.password secret 
-    

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/exitcodes.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/exitcodes.md b/src/site/markdown/exitcodes.md
deleted file mode 100644
index ac63fe1..0000000
--- a/src/site/markdown/exitcodes.md
+++ /dev/null
@@ -1,161 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Apache Slider Client Exit Codes
-
-Here are the exit codes returned by the Slider client.
-
-Exit code values 1 and 2 are interpreted by YARN -in particular, converting the
-"1" value from an error into a successful shutdown. Slider
-converts the -1 error code from a forked process into `EXIT_PROCESS_FAILED`,
-number 72.
-
-
-    /**
-     * 0: success
-     */
-    int EXIT_SUCCESS                    =  0;
-    
-    /**
-     * -1: generic "false" response. The operation worked but
-     * the result was not true
-     */
-    int EXIT_FALSE                      = -1;
-    
-    /**
-     * Exit code when a client requested service termination:
-     */
-    int EXIT_CLIENT_INITIATED_SHUTDOWN  =  1;
-    
-    /**
-     * Exit code when targets could not be launched:
-     */
-    int EXIT_TASK_LAUNCH_FAILURE        =  2;
-    
-    /**
-     * Exit code when an exception was thrown from the service:
-     */
-    int EXIT_EXCEPTION_THROWN           = 32;
-    
-    /**
-     * Exit code when a usage message was printed:
-     */
-    int EXIT_USAGE                      = 33;
-    
-    /**
-     * Exit code when something happened but we can't be specific:
-     */
-    int EXIT_OTHER_FAILURE              = 34;
-    
-    /**
-     * Exit code when a control-C, kill -3, signal was picked up:
-     */
-                                  
-    int EXIT_INTERRUPTED                = 35;
-    
-    /**
-     * Exit code when the command line doesn't parse, or
-     * when it is otherwise invalid:
-     */
-    int EXIT_COMMAND_ARGUMENT_ERROR     = 36;
-    
-    /**
-     * Exit code when the configuration is invalid or incomplete:
-     */
-    int EXIT_BAD_CONFIGURATION          = 37;
-    
-    /**
-     * Exit code when there is a connectivity problem:
-     */
-    int EXIT_CONNECTIVTY_PROBLEM        = 38;
-    
-    /**
-     * internal error: {@value}
-     */
-    int EXIT_INTERNAL_ERROR = 64;
-    
-    /**
-     * Unimplemented feature: {@value}
-     */
-    int EXIT_UNIMPLEMENTED =        65;
-  
-    /**
-     * service entered the failed state: {@value}
-     */
-    int EXIT_YARN_SERVICE_FAILED =  66;
-  
-    /**
-     * service was killed: {@value}
-     */
-    int EXIT_YARN_SERVICE_KILLED =  67;
-  
-    /**
-     * timeout on monitoring client: {@value}
-     */
-    int EXIT_TIMED_OUT =            68;
-  
-    /**
-     * service finished with an error: {@value}
-     */
-    int EXIT_YARN_SERVICE_FINISHED_WITH_ERROR = 69;
-  
-    /**
-     * the application instance is unknown: {@value}
-     */
-    int EXIT_UNKNOWN_INSTANCE = 70;
-  
-    /**
-     * the application instance is in the wrong state for that operation: {@value}
-     */
-    int EXIT_BAD_STATE =    71;
-  
-    /**
-     * A spawned master process failed 
-     */
-    int EXIT_PROCESS_FAILED = 72;
-  
-    /**
-     * The cluster failed -too many containers were
-     * failing or some other threshold was reached
-     */
-    int EXIT_DEPLOYMENT_FAILED = 73;
-  
-    /**
-     * The application is live -and the requested operation
-     * does not work if the cluster is running
-     */
-    int EXIT_APPLICATION_IN_USE = 74;
-  
-    /**
-     * There already is an application instance of that name
-     * when an attempt is made to create a new instance
-     */
-    int EXIT_INSTANCE_EXISTS = 75;
-    
-    /**
-     * The resource was not found
-     */
-    int EXIT_NOT_FOUND = 77;
-    
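-As a hedged illustration only, a wrapper script could translate the numeric codes above into labels; the function name below is invented for this sketch and is not part of the Slider CLI:

```shell
#!/usr/bin/env sh
# slider_exit_label <code>: map a Slider client exit code to a short
# human-readable label (a subset of the table above).
slider_exit_label() {
  case "$1" in
    0)  echo "success" ;;
    1)  echo "client-initiated shutdown" ;;
    2)  echo "task launch failure" ;;
    36) echo "command argument error" ;;
    70) echo "unknown application instance" ;;
    72) echo "spawned master process failed" ;;
    77) echo "not found" ;;
    *)  echo "unmapped exit code $1" ;;
  esac
}
```

A hypothetical use would be `slider list cl1; slider_exit_label $?`.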
-## Other exit codes
-
-YARN itself can fail containers; here are some of the causes we've seen:
-
-
-    143: Appears to be triggered by the container exceeding its cgroup memory
-    limits
- 

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/getting_started.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/getting_started.md b/src/site/markdown/getting_started.md
deleted file mode 100644
index 7488c6c..0000000
--- a/src/site/markdown/getting_started.md
+++ /dev/null
@@ -1,509 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Apache Slider: Getting Started
-
-
-## Introduction
-
-The following provides the steps required for setting up a cluster and deploying a YARN hosted application using Slider.
-
-* [Prerequisites](#sysreqs)
-
-* [Setup the Cluster](#setup)
-
-* [Download Slider Packages](#download)
-
-* [Build Slider](#build)
-
-* [Install Slider](#install)
-
-* [Deploy Slider Resources](#deploy)
-
-* [Download Sample Application Packages](#downsample)
-
-* [Install, Configure, Start and Verify Sample Application](#installapp)
-
-* [Appendix A: Storm Sample Application Specifications](#appendixa)
-
-* [Appendix B: HBase Sample Application Specifications](#appendixb)
-
-## <a name="sysreqs"></a>System Requirements
-
-The Slider deployment has the following minimum system requirements:
-
-* Hadoop 2.4+
-
-* Required Services: HDFS, YARN, MapReduce2 and ZooKeeper
-
-* Oracle JDK 1.7 (64-bit)
-
-## <a name="setup"></a>Setup the Cluster
-
-After setting up your Hadoop cluster (using Ambari or other means) with the 
-services listed above, modify your YARN configuration to allow for multiple
-containers on a single host. In `yarn-site.xml` make the following modifications:
-
-<table>
-  <tr>
-    <td>Property</td>
-    <td>Value</td>
-  </tr>
-  <tr>
-    <td>yarn.scheduler.minimum-allocation-mb</td>
-    <td>>= 256</td>
-  </tr>
-  <tr>
-    <td>yarn.nodemanager.delete.debug-delay-sec</td>
-    <td>>= 3600 (to retain for an hour)</td>
-  </tr>
-</table>
-
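-Expressed as `yarn-site.xml` entries, the table above corresponds to the following; the values shown are illustrative minimums, not recommendations:

```xml
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>

<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>3600</value>
</property>
```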
-
-There are other options detailed in the Troubleshooting file available <a href="troubleshooting.html">here</a>
-
-
-## <a name="download"></a>Download Slider Packages
-
-Slider releases are available at
-[https://www.apache.org/dyn/closer.cgi/incubator/slider](https://www.apache.org/dyn/closer.cgi/incubator/slider).
-
-## <a name="build"></a>Build Slider
-
-* From the top level directory, execute `mvn clean install -DskipTests`
-* Use the generated compressed tar file in the `slider-assembly/target` directory (e.g. `slider-0.30.0-all.tar.gz`) for the subsequent steps
-
-## <a name="install"></a>Install Slider
-
-Follow these steps to expand and install Slider:
-
-    mkdir ${slider-install-dir}
-
-    cd ${slider-install-dir}
-
-Login as the "yarn" user (assuming this is a host associated with the installed cluster).  E.g., `su yarn`
-*This assumes that all apps are being run as the ‘yarn’ user. Any other user can be used to run the apps - ensure that file permissions are granted as required.*
-
-Expand the tar file:  `tar -xvf slider-0.30.0-all.tar.gz`
-
-Browse to the Slider directory: `cd slider-0.30.0/bin`
-
-      export PATH=$PATH:/usr/jdk64/jdk1.7.0_45/bin 
-    
-(or the path to the JDK bin directory)
-
-Modify Slider configuration file `${slider-install-dir}/slider-0.30.0/conf/slider-client.xml` to add the following properties:
-
-      <property>
-          <name>yarn.application.classpath</name>
-          <value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*</value>
-      </property>
-      
-      <property>
-          <name>slider.zookeeper.quorum</name>
-          <value>yourZooKeeperHost:port</value>
-      </property>
-
-
-In addition, specify the scheduler and HDFS addresses as follows:
-
-    <property>
-        <name>yarn.resourcemanager.address</name>
-        <value>yourResourceManagerHost:8050</value>
-    </property>
-    <property>
-        <name>yarn.resourcemanager.scheduler.address</name>
-        <value>yourResourceManagerHost:8030</value>
-    </property>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://yourNameNodeHost:8020</value>
-    </property>
-
-
-Execute:
- 
-    ${slider-install-dir}/slider-0.30.0/bin/slider version
-
-Ensure there are no errors and that you can see "Compiled against Hadoop 2.4.0".
-
-## <a name="deploy"></a>Deploy Slider Resources
-
-Ensure that all file folders are accessible to the user creating the application instance. The example assumes "yarn" to be that user.
-
-### Create HDFS root folder for Slider
-
-Perform the following steps to create the Slider root folder with the appropriate permissions:
-
-    su hdfs
-    
-    hdfs dfs -mkdir /slider
-    
-    hdfs dfs -chown yarn:hdfs /slider
-    
-    hdfs dfs -mkdir /user/yarn
-    
-    hdfs dfs -chown yarn:hdfs /user/yarn
-
-### Load Slider Agent
-
-    su yarn
-    
-    hdfs dfs -mkdir /slider/agent
-    
-    hdfs dfs -mkdir /slider/agent/conf
-    
-    hdfs dfs -copyFromLocal ${slider-install-dir}/slider-0.30.0/agent/slider-agent-0.30.0.tar.gz /slider/agent
-
-### Create and deploy Slider Agent configuration
-
-Create an agent config file (agent.ini) based on the sample available at:
-
-    ${slider-install-dir}/slider-0.30.0/agent/conf/agent.ini
-
-The sample agent.ini file can be used as is (see below). Some of the parameters of interest are:
-
-* `log_level` = INFO or DEBUG, to control the verbosity of logging
-* `app_log_dir` = the relative location of the application log file
-* `log_dir` = the relative location of the agent and command log file
-
-    [server]
-    hostname=localhost
-    port=8440
-    secured_port=8441
-    check_path=/ws/v1/slider/agents/
-    register_path=/ws/v1/slider/agents/{name}/register
-    heartbeat_path=/ws/v1/slider/agents/{name}/heartbeat
-
-    [agent]
-    app_pkg_dir=app/definition
-    app_install_dir=app/install
-    app_run_dir=app/run
-    app_task_dir=app/command-log
-    app_log_dir=app/log
-    app_tmp_dir=app/tmp
-    log_dir=infra/log
-    run_dir=infra/run
-    version_file=infra/version
-    log_level=INFO
-
-    [python]
-
-    [command]
-    max_retries=2
-    sleep_between_retries=1
-
-    [security]
-
-    [heartbeat]
-    state_interval=6
-    log_lines_count=300
-
-
-Once created, deploy the agent.ini file to HDFS:
-
-    su yarn
-    
-    hdfs dfs -copyFromLocal agent.ini /slider/agent/conf
-
-## <a name="downsample"></a>Download Sample Application Packages
-
-There are three sample application packages available for download to use with Slider:
-
-<table>
-  <tr>
-    <td>Application</td>
-    <td>Version</td>
-    <td>URL</td>
-  </tr>
-  <tr>
-    <td>Apache HBase</td>
-    <td>0.96.0</td>
-    <td>http://public-repo-1.hortonworks.com/slider/hbase_v096.tar</td>
-  </tr>
-  <tr>
-    <td>Apache Storm</td>
-    <td>0.9.1</td>
-    <td>http://public-repo-1.hortonworks.com/slider/storm_v091.tar</td>
-  </tr>
-  <tr>
-    <td>Apache Accumulo</td>
-    <td>1.5.1</td>
-    <td>http://public-repo-1.hortonworks.com/slider/accumulo_v151.tar</td>
-  </tr>
-</table>
-
-
-Download the packages and deploy one of these sample applications to YARN via Slider using the steps below.
-
-## <a name="installapp"></a>Install, Configure, Start and Verify Sample Application
-
-* [Load Sample Application Package](#load)
-
-* [Create Application Specifications](#create)
-
-* [Start the Application](#start)
-
-* [Verify the Application](#verify)
-
-* [Manage the Application Lifecycle](#manage)
-
-### <a name="load"></a>Load Sample Application Package
-
-    hdfs dfs -copyFromLocal <sample-application-package> /slider
-
-If necessary, create HDFS folders needed by the application. For example, HBase requires the following HDFS-based setup:
-
-    su hdfs
-    
-    hdfs dfs -mkdir /apps
-    
-    hdfs dfs -mkdir /apps/hbase
-    
-    hdfs dfs -chown yarn:hdfs /apps/hbase
-
-### <a name="create"></a>Create Application Specifications
-
-Configuring a Slider application consists of two parts: the [Resource Specification](#resspec)
-and the [Application Configuration](#appconfig). Below are guidelines for creating these files.
-
-*Note: There are sample Resource Specification (**resources.json**) and Application Configuration
-(**appConfig.json**) files in the [Appendix](#appendixa) and also in the root directory of the
-sample application packages (e.g. **hbase-v096/resources.json** and **hbase-v096/appConfig.json**).*
-
-#### <a name="resspec"></a>Resource Specification
-
-Slider needs to know what components (and how many components) are in an application package to deploy. For example, in HBase, the components are **_master_** and **_worker_** -- the latter hosting **HBase RegionServers**, and the former hosting the **HBase Master**. 
-
-As Slider creates each instance of a component in its own YARN container, it also needs to know what to ask YARN for in terms of **memory** and **CPU** for those containers. 
-
-All this information goes into the **Resource Specification** file ("Resource Spec") named `resources.json`. The Resource Spec tells Slider how many instances of each component in the application (such as an HBase RegionServer) to deploy, and what memory and CPU to request from YARN for each of their containers.
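As an illustration, the skeleton of such a file can be built programmatically. The component names and sizing values below are placeholders, not recommendations; `yarn.memory` and `yarn.vcores` are the container-sizing keys, and each component needs its own `yarn.role.priority`:

```python
import json

# Illustrative placeholders only: pick component names, instance counts,
# memory (MB) and vcores to suit your application and cluster.
resources = {
    "schema": "http://example.org/specification/v2.0.0",
    "metadata": {},
    "global": {},
    "components": {
        "slider-appmaster": {},
        "HBASE_MASTER": {
            "yarn.role.priority": "1",
            "yarn.component.instances": "1",
            "yarn.memory": "1024",
            "yarn.vcores": "1",
        },
        "HBASE_REGIONSERVER": {
            "yarn.role.priority": "2",
            "yarn.component.instances": "4",
            "yarn.memory": "2048",
            "yarn.vcores": "1",
        },
    },
}

# Write the Resource Spec to local disk, ready to pass to the Slider CLI.
with open("/tmp/resources.json", "w") as f:
    json.dump(resources, f, indent=2)
```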
-
-Sample Resource Spec files are available in the Appendix:
-
-* [Appendix A: Storm Sample Resource Specification](#appendixa)
-
-* [Appendix B: HBase Sample Resource Specification](#appendixb)
-
-Store the Resource Spec file on your local disk (e.g. `/tmp/resources.json`).
-
-#### <a name="appconfig"></a>Application Configuration
-
-Alongside the Resource Spec is the **Application Configuration** file ("App Config"), which holds the parameters specific to the application itself, rather than to YARN. This configuration is applied on top of the default configuration provided by the application definition, and is then handed off to the associated component agent.
-
-The App Config defines the configuration details **specific to the application and component** instances. For HBase, this includes any values for the *to-be-generated* hbase-site.xml file, as well as options for individual components, such as their JVM heap sizes.
-
-Sample App Configs are available in the Appendix:
-
-* [Appendix A: Storm Sample Application Configuration](#appendixa)
-
-* [Appendix B: HBase Sample Application Configuration](#appendixb)
-
-Store the appConfig.json file on your local disk and a copy in HDFS:
-
-    su yarn
-    
-    hdfs dfs -mkdir /slider/appconf
-    
-    hdfs dfs -copyFromLocal appConfig.json /slider/appconf
-
-### <a name="start"></a>Start the Application
-
-Once the steps above are completed, the application can be started through the **Slider Command Line Interface (CLI)**.
-
-Change directory to the "bin" directory under the Slider installation:
-
-    cd ${slider-install-dir}/slider-0.30.0/bin
-
-Execute the following command:
-
-    ./slider create cl1 --manager yourResourceManagerHost:8050 --image hdfs://yourNameNodeHost:8020/slider/agent/slider-agent-0.30.0.tar.gz --template appConfig.json --resources resources.json
-
-### <a name="verify"></a>Verify the Application
-
-The successful launch of the application can be verified via the YARN Resource Manager Web UI. In most instances, this UI is accessible via a web browser at port 8088 of the Resource Manager Host:
-
-![image alt text](images/image_0.png)
-
-The specific information for the running application is accessible via the "ApplicationMaster" link that can be seen in the far right column of the row associated with the running application (probably the top row):
-
-![image alt text](images/image_1.png)
-
-### <a name="manage"></a>Manage the Application Lifecycle
-
-Once started, applications can be frozen/stopped, thawed/restarted, flexed/resized, and destroyed/removed as follows:
-
-#### Frozen:
-
-    ./slider freeze cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
-
-#### Thawed: 
-
-    ./slider thaw cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
-
-#### Destroyed: 
-
-    ./slider destroy cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
-
-#### Flexed:
-
-    ./slider flex cl1 --component worker 5 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
-
-# <a name="appendixa"></a>Appendix A: Apache Storm Sample Application Specifications
-
-## Storm Resource Specification Sample
-
-    {
-      "schema" : "http://example.org/specification/v2.0.0",
-      "metadata" : {
-      },
-      "global" : {
-      },
-      "components" : {
-        "slider-appmaster" : {
-        },
-        "NIMBUS" : {
-            "yarn.role.priority" : "1",
-            "yarn.component.instances" : "1"
-        },
-        "STORM_REST_API" : {
-            "yarn.role.priority" : "2",
-            "yarn.component.instances" : "1"
-        },
-        "STORM_UI_SERVER" : {
-            "yarn.role.priority" : "3",
-            "yarn.component.instances" : "1"
-        },
-        "DRPC_SERVER" : {
-            "yarn.role.priority" : "4",
-            "yarn.component.instances" : "1"
-        },
-        "SUPERVISOR" : {
-            "yarn.role.priority" : "5",
-            "yarn.component.instances" : "1"
-        }
-      }
-    }
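One easy mistake in a Resource Spec is reusing a `yarn.role.priority` value: each component must have a distinct priority, as in the sample above (1 through 5). A small validation sketch, assuming the spec has been loaded as a Python dict (the clashing spec below is a deliberately broken example):

```python
import json

def duplicate_priorities(resource_spec):
    """Return yarn.role.priority values used by more than one component."""
    seen = {}
    for name, props in resource_spec.get("components", {}).items():
        prio = props.get("yarn.role.priority")
        if prio is not None:
            seen.setdefault(prio, []).append(name)
    return {p: names for p, names in seen.items() if len(names) > 1}

# A deliberately broken spec: NIMBUS and SUPERVISOR clash on priority "1".
spec = json.loads("""
{
  "components": {
    "NIMBUS": {"yarn.role.priority": "1", "yarn.component.instances": "1"},
    "SUPERVISOR": {"yarn.role.priority": "1", "yarn.component.instances": "3"}
  }
}
""")
```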
-
-
-## Storm Application Configuration Sample
-
-    {
-      "schema" : "http://example.org/specification/v2.0.0",
-      "metadata" : {
-      },
-      "global" : {
-          "A site property for type XYZ with name AA": "its value",
-          "site.XYZ.AA": "Value",
-          "site.hbase-site.hbase.regionserver.port": "0",
-          "site.core-site.fs.defaultFS": "${NN_URI}",
-          "Using a well known keyword": "Such as NN_HOST for name node host",
-          "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070",
-          "a global property used by app scripts": "not affiliated with any site-xml",
-          "site.global.app_user": "yarn",
-          "Another example of available keywords": "Such as AGENT_LOG_ROOT",
-          "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-          "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run"
-      }
-    }
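The `${...}` tokens above (`NN_URI`, `NN_HOST`, `AGENT_LOG_ROOT`, and so on) are well-known keywords that Slider resolves to cluster-specific values at deployment time. Conceptually this works like ordinary template substitution; a sketch using Python's `string.Template`, with made-up cluster values:

```python
from string import Template

# Hypothetical cluster values; at deploy time Slider supplies the real ones.
cluster = {
    "NN_URI": "hdfs://namenode.example.com:8020",
    "NN_HOST": "namenode.example.com",
    "AGENT_LOG_ROOT": "/var/log/slider-agent",
}

# Expand a config value the same way Slider conceptually would.
expanded = Template("${NN_HOST}:50070").substitute(cluster)
```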
-
-
-# <a name="appendixb"></a>Appendix B:  Apache HBase Sample Application Specifications
-
-## HBase Resource Specification Sample
-
-    {
-      "schema" : "http://example.org/specification/v2.0.0",
-      "metadata" : {
-      },
-      "global" : {
-      },
-      "components" : {
-        "HBASE_MASTER" : {
-            "yarn.role.priority" : "1",
-            "yarn.component.instances" : "1"
-        },
-        "slider-appmaster" : {
-        },
-        "HBASE_REGIONSERVER" : {
-            "yarn.role.priority" : "2",
-            "yarn.component.instances" : "1"
-        }
-      }
-    }
-
-
-## HBase Application Configuration Sample
-
-    {
-      "schema" : "http://example.org/specification/v2.0.0",
-      "metadata" : {
-      },
-      "global" : {
-        "agent.conf": "/slider/agent/conf/agent.ini",
-        "agent.version": "/slider/agent/version",
-        "application.def": "/slider/hbase_v096.tar",
-        "config_types": "core-site,hdfs-site,hbase-site",
-        "java_home": "/usr/jdk64/jdk1.7.0_45",
-        "package_list": "files/hbase-0.96.1-hadoop2-bin.tar",
-        "site.global.app_user": "yarn",
-        "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-        "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-        "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-0.96.1-hadoop2",
-        "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",
-        "site.global.hbase_master_heapsize": "1024m",
-        "site.global.hbase_regionserver_heapsize": "1024m",
-        "site.global.user_group": "hadoop",
-        "site.global.security_enabled": "false",
-        "site.hbase-site.hbase.hstore.flush.retries.number": "120",
-        "site.hbase-site.hbase.client.keyvalue.maxsize": "10485760",
-        "site.hbase-site.hbase.hstore.compactionThreshold": "3",
-        "site.hbase-site.hbase.rootdir": "${NN_URI}/apps/hbase/data",
-        "site.hbase-site.hbase.stagingdir": "${NN_URI}/apps/hbase/staging",
-        "site.hbase-site.hbase.regionserver.handler.count": "60",
-        "site.hbase-site.hbase.regionserver.global.memstore.lowerLimit": "0.38",
-        "site.hbase-site.hbase.hregion.memstore.block.multiplier": "2",
-        "site.hbase-site.hbase.hregion.memstore.flush.size": "134217728",
-        "site.hbase-site.hbase.superuser": "yarn",
-        "site.hbase-site.hbase.zookeeper.property.clientPort": "2181",
-        "site.hbase-site.hbase.regionserver.global.memstore.upperLimit": "0.4",
-        "site.hbase-site.zookeeper.session.timeout": "30000",
-        "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
-        "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
-        "site.hbase-site.hbase.hregion.max.filesize": "10737418240",
-        "site.hbase-site.hfile.block.cache.size": "0.40",
-        "site.hbase-site.hbase.security.authentication": "simple",
-        "site.hbase-site.hbase.defaults.for.version.skip": "true",
-        "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
-        "site.hbase-site.zookeeper.znode.parent": "/hbase-unsecure",
-        "site.hbase-site.hbase.hstore.blockingStoreFiles": "10",
-        "site.hbase-site.hbase.hregion.majorcompaction": "86400000",
-        "site.hbase-site.hbase.security.authorization": "false",
-        "site.hbase-site.hbase.cluster.distributed": "true",
-        "site.hbase-site.hbase.hregion.memstore.mslab.enabled": "true",
-        "site.hbase-site.hbase.client.scanner.caching": "100",
-        "site.hbase-site.hbase.zookeeper.useMulti": "true",
-        "site.hbase-site.hbase.regionserver.info.port": "0",
-        "site.hbase-site.hbase.master.info.port": "60010",
-        "site.hbase-site.hbase.regionserver.port": "0",
-        "site.core-site.fs.defaultFS": "${NN_URI}",
-        "site.hdfs-site.dfs.namenode.https-address": "${NN_HOST}:50470",
-        "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070"
-      }
-    }
-
-

http://git-wip-us.apache.org/repos/asf/incubator-slider/blob/209cee43/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/index.md b/src/site/markdown/index.md
deleted file mode 100644
index 2c53ec7..0000000
--- a/src/site/markdown/index.md
+++ /dev/null
@@ -1,94 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-  
-
-# Apache Slider: Dynamic YARN Applications
-
-
-
-Apache Slider is a YARN application to deploy existing distributed applications on YARN, 
-monitor them and make them larger or smaller as desired -even while 
-the application is running.
-
-Applications can be stopped, "frozen" and restarted, "thawed" later; the distribution
-of the deployed application across the YARN cluster is persisted -enabling
-a best-effort placement close to the previous locations on a cluster thaw.
-Applications which remember the previous placement of data (such as HBase)
-can exhibit fast start-up times from this feature.
-
-YARN itself monitors the health of "YARN containers" hosting parts of 
-the deployed application -it notifies the Slider manager application of container
-failure. Slider then asks YARN for a new container, into which Slider deploys
-a replacement for the failed component. As a result, Slider can keep the
-size of managed applications consistent with the specified configuration, even
-in the face of failures of servers in the cluster -as well as parts of the
-application itself.
-
-Some of the features are:
-
-* Allows users to create on-demand applications in a YARN cluster
-
-* Allows different users/applications to run different versions of the application
-
-* Allows users to configure different application instances differently
-
-* Stop / Suspend / Resume application instances as needed
-
-* Expand / shrink application instances as needed
-
-The Slider tool is a Java command line application.
-
-The tool persists the information as JSON documents in HDFS.
-
-Once the cluster has been started, the cluster can be made to grow or shrink
-using the Slider commands. The cluster can also be stopped, *frozen*
-and later resumed, *thawed*.
-      
-Slider implements all its functionality through YARN APIs and the existing
-application shell scripts. The goal is to deploy applications with minimal
-changes to their code, and as of this writing, few changes have been required.
-
-## Using 
-
-* [Getting Started](getting_started.html)
-* [Man Page](manpage.html)
-* [Examples](examples.html)
-* [Client Configuration](client-configuration.html)
-* [Client Exit Codes](exitcodes.html)
-* [Security](security.html)
-* [How to define a new slider-packaged application](slider_specs/index.html)
-* [Application configuration model](configuration/index.html)
-
-
-## Developing 
-
-* [Architecture](architecture/index.html)
-* [Developing](developing/index.html)
-* [Application Needs](slider_specs/application_needs.html)
-* [Service Registry](registry/index.html)
-
-## Disclaimer
-
-Apache Slider (incubating) is an effort undergoing incubation at The
-Apache Software Foundation (ASF), sponsored by the Apache Incubator.
-Incubation is required of all newly accepted projects until a
-further review indicates that the infrastructure, communications, and
-decision making process have stabilized in a manner consistent with
-other successful ASF projects. While incubation status is not
-necessarily a reflection of the completeness or stability of the code,
-it does indicate that the project has yet to be fully endorsed by the
-ASF.