Posted to dev@sdap.apache.org by GitBox <gi...@apache.org> on 2018/06/29 14:44:50 UTC

[GitHub] fgreg closed pull request #16: SDAP-57 Update nexus docker images

URL: https://github.com/apache/incubator-sdap-nexus/pull/16
 
 
   

This is a PR merged from a forked repository. As GitHub hides the
original diff on merge, it is reproduced below for the sake of provenance:

diff --git a/docker/cassandra/Dockerfile b/docker/cassandra/Dockerfile
index 59f9022..e9c93b7 100644
--- a/docker/cassandra/Dockerfile
+++ b/docker/cassandra/Dockerfile
@@ -1,5 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 FROM cassandra:2.2.8
 
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
+
 RUN apt-get update && apt-get -y install git && rm -rf /var/lib/apt/lists/*
 
-RUN cd / && git clone https://github.com/dataplumber/nexus.git && cp -r /nexus/data-access/config/schemas/cassandra/nexustiles.cql /tmp/. && rm -rf /nexus
+RUN cd / && git clone https://github.com/apache/incubator-sdap-nexus.git && cp -r /incubator-sdap-nexus/data-access/config/schemas/cassandra/nexustiles.cql /tmp/. && rm -rf /incubator-sdap-nexus
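
For a quick local check of the change above, the image can be built and the
schema file inspected (the `sdap/cassandra` tag below is only an example):

    docker build -t sdap/cassandra docker/cassandra
    docker run --rm --entrypoint cat sdap/cassandra /tmp/nexustiles.cql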
diff --git a/docker/ingest-admin/Dockerfile b/docker/ingest-admin/Dockerfile
deleted file mode 100644
index e11f71d..0000000
--- a/docker/ingest-admin/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM nexusjpl/ingest-base
-
-USER root
-RUN yum install -y https://archive.cloudera.com/cdh5/one-click-install/redhat/7/x86_64/cloudera-cdh-5-0.x86_64.rpm && \
-    yum install -y zookeeper
-
-COPY nx-env.sh /usr/local/nx-env.sh
-COPY nx-deploy-stream.sh /usr/local/nx-deploy-stream.sh
-
-USER springxd
-ENTRYPOINT ["/usr/local/nexus-ingest.sh"]
-CMD ["--admin"]
\ No newline at end of file
diff --git a/docker/ingest-admin/README.md b/docker/ingest-admin/README.md
deleted file mode 100644
index 6da6db1..0000000
--- a/docker/ingest-admin/README.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# ingest-admin Docker
-
-This can be used to start a spring-xd admin node.
-
-# Docker Compose
-
-Use the [docker-compose.yml](docker-compose.yml) file to start up mysql, redis, and xd-admin in one command. Example:
-
-    MYSQL_PASSWORD=admin ZK_HOST_IP=10.200.10.1 KAFKA_HOST_IP=10.200.10.1 docker-compose up
-
-`MYSQL_PASSWORD` sets the password for a new MySQL user called `xd` when the MySQL database is initialized.
-`ZK_HOST_IP` must be set to a valid IP address of a ZooKeeper host that will be used to manage Spring XD.
-`KAFKA_HOST_IP` must be set to a valid IP address of a Kafka broker that will be used for the transport layer of Spring XD.
-
-# Docker Run
-
-This container relies on 4 external services that must already be running: MySQL, Redis, Zookeeper, and Kafka.
-
-To start the server use:
-
-    docker run -it \
-    -e "MYSQL_PORT_3306_TCP_ADDR=mysqldb" -e "MYSQL_PORT_3306_TCP_PORT=3306" \
-    -e "MYSQL_USER=xd" -e "MYSQL_PASSWORD=admin" \
-    -e "REDIS_ADDR=redis" -e "REDIS_PORT=6379" \
-    -e "ZOOKEEPER_CONNECT=zkhost:2181" -e "ZOOKEEPER_XD_CHROOT=springxd" \
-    -e "KAFKA_BROKERS=kafka1:9092" -e "KAFKA_ZKADDRESS=zkhost:2181/kafka" \
-    --add-host="zkhost:10.200.10.1" \
-    --add-host="kafka1:10.200.10.1" \
-    --name xd-admin nexusjpl/ingest-admin
-
-This mode requires a number of environment variables to be defined.
-
-#####  `MYSQL_PORT_3306_TCP_ADDR`
-
-Address of a running MySQL service
-
-#####  `MYSQL_PORT_3306_TCP_PORT`
-
-Port for running MySQL service
-
-#####  `MYSQL_USER`
-
-Username used to connect to the MySQL service
-
-#####  `MYSQL_PASSWORD`
-
-Password for connecting to MySQL service
-
-#####  `ZOOKEEPER_CONNECT`
-
-ZooKeeper connect string. This can be a comma-delimited list of host:port values.
-
-#####  `ZOOKEEPER_XD_CHROOT`
-
-ZooKeeper root node for Spring XD
-
-#####  `REDIS_ADDR`
-
-Address of a running Redis service
-
-#####  `REDIS_PORT`
-
-Port for running Redis service
-
-#####  `KAFKA_BROKERS`
-
-Comma-delimited list of host:port values which define the Kafka brokers used for transport.
-
-#####  `KAFKA_ZKADDRESS`
-
-Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port of a ZooKeeper server.
-
-The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of `/chroot/path` you would give the connection string as `hostname1:port1,hostname2:port2,hostname3:port3/chroot/path`.
-
-# XD Shell
-
-## Using Docker Exec
-
-Once the xd-admin container is running you can use docker exec to start an XD Shell that communicates with the xd-admin server:
-
-    docker exec -it xd-admin xd-shell
-
-## Using Standalone Container
-You can use the springxd/shell Docker image to start a separate container running the XD shell connected to the xd-admin server:
-
-    docker run -it --network container:xd-admin springxd/shell
\ No newline at end of file
diff --git a/docker/ingest-admin/docker-compose.yml b/docker/ingest-admin/docker-compose.yml
deleted file mode 100644
index 0f2c5b6..0000000
--- a/docker/ingest-admin/docker-compose.yml
+++ /dev/null
@@ -1,73 +0,0 @@
-version: '3'
-
-networks:
-  ingestnetwork:
-  nexus:
-      external: true
-
-services:
-
-    mysqldb:
-        image: mysql:8
-        hostname: mysqldb
-        expose:
-            - "3306"
-        environment:
-            - MYSQL_RANDOM_ROOT_PASSWORD=yes
-            - MYSQL_DATABASE=xdjob
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD=${MYSQL_PASSWORD}
-        networks:
-            - ingestnetwork
-            - nexus
-        deploy:
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-    
-    redis:
-        image: redis:3
-        container_name: redis
-        expose:
-            - "6379"
-        networks:
-            - ingestnetwork
-            - nexus
-        deploy:
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-            
-    xd-admin:
-        image: nexusjpl/ingest-admin
-        container_name: xd-admin
-        command: [-a]
-        environment:
-            - MYSQL_PORT_3306_TCP_ADDR=mysqldb
-            - MYSQL_PORT_3306_TCP_PORT=3306
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD=${MYSQL_PASSWORD}
-            - REDIS_ADDR=redis
-            - REDIS_PORT=6379
-            - "ZOOKEEPER_CONNECT=zkhost:2181"
-            - ZOOKEEPER_XD_CHROOT=springxd
-            - "KAFKA_BROKERS=kafka1:9092"
-            - "KAFKA_ZKADDRESS=zkhost:2181/kafka"
-        depends_on:
-            - mysqldb
-            - redis
-        extra_hosts:
-            - "zkhost:$ZK_HOST_IP"
-            - "kafka1:$KAFKA_HOST_IP"
-        networks:
-            - ingestnetwork
-            - nexus
-        deploy:
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-            restart_policy:
-                condition: on-failure
-                delay: 5s
-                max_attempts: 3
-                window: 120s
\ No newline at end of file
diff --git a/docker/ingest-admin/nx-deploy-stream.sh b/docker/ingest-admin/nx-deploy-stream.sh
deleted file mode 100755
index e77483a..0000000
--- a/docker/ingest-admin/nx-deploy-stream.sh
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/bin/bash
-
-. /usr/local/nx-env.sh
-
-if [ $# -gt 0 ]; then
-  while true; do  
-    case "$1" in
-        --datasetName)
-            DATASET_NAME="$2" 
-            shift 2
-        ;;
-        --dataDirectory)
-            DATA_DIR="$2"
-            shift 2
-        ;;
-        --variableName)
-            VARIABLE="$2"
-            shift 2
-        ;;
-        --tilesDesired)
-            TILES_DESIRED="$2"
-            shift 2
-        ;;
-        *)
-            break # out-of-args, stop looping
-
-        ;;
-    esac
-  done
-fi
-
-echo "stream create --name ${DATASET_NAME} --definition \"scan-for-avhrr-granules: file --dir=${DATA_DIR} --mode=ref --pattern=*.nc --maxMessages=1 --fixedDelay=1 | header-absolutefilepath: header-enricher --headers={\\\"absolutefilepath\\\":\\\"payload\\\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=${TILES_DESIRED} | join-with-static-time: transform --expression=\\\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\\\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.kelvintocelsius.transform:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=${VARIABLE},LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp,STORED_VAR_NAME=${VARIABLE} --bufferSize=1000000 --remoteReplyTimeout=360000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=${DATASET_NAME}' | nexus --cassandraContactPoints=${CASS_HOST} --cassandraKeyspace=nexustiles --solrCloudZkHost=${SOLR_CLOUD_ZK_HOST} --solrCollection=nexustiles --cassandraPort=${CASS_PORT}\"" > /tmp/stream-create
-
-xd-shell --cmdfile /tmp/stream-create
-
-echo "stream deploy --name ${DATASET_NAME} --properties module.python-chain.count=3,module.nexus.count=3" > /tmp/stream-deploy
-
-xd-shell --cmdfile /tmp/stream-deploy
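
For reference, this deploy script takes the four flags parsed in the case
statement above; a typical invocation (values borrowed from the AVHRR stream
definition in docker/ingest-base/stream-definitions) would look like:

    ./nx-deploy-stream.sh --datasetName AVHRR_OI_L4_GHRSST_NCEI \
        --dataDirectory /usr/local/data/nexus/AVHRR_L4_GLOB_V2/daily_data \
        --variableName analysed_sst \
        --tilesDesired 1296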
diff --git a/docker/ingest-admin/nx-env.sh b/docker/ingest-admin/nx-env.sh
deleted file mode 100755
index 93ee22f..0000000
--- a/docker/ingest-admin/nx-env.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-
-export SOLR_CLOUD_ZK_HOST=zk1:2181,zk2:2181,zk3:2181/solr
-export CASS_HOST=cassandra1,cassandra2,cassandra3
-export CASS_PORT=9042
diff --git a/docker/ingest-base/Dockerfile b/docker/ingest-base/Dockerfile
deleted file mode 100644
index e612ebe..0000000
--- a/docker/ingest-base/Dockerfile
+++ /dev/null
@@ -1,62 +0,0 @@
-FROM nexusjpl/nexusbase
-
-WORKDIR /tmp
-
-RUN yum -y install unzip nc
-
-# Create conda environment and install dependencies
-RUN conda create -y --name nexus-xd-python-modules python && \
-    source activate nexus-xd-python-modules && \
-    conda install -y scipy=0.18.1 && \
-    conda install -y -c conda-forge nco=4.6.4 netcdf4=1.2.7
-    
-# Install Spring XD
-RUN groupadd -r springxd && adduser -r -g springxd springxd
-
-WORKDIR /usr/local/spring-xd
-RUN wget -q "http://repo.spring.io/libs-release/org/springframework/xd/spring-xd/1.3.1.RELEASE/spring-xd-1.3.1.RELEASE-dist.zip" && \
-    unzip spring-xd-1.3.1.RELEASE-dist.zip && \
-    rm spring-xd-1.3.1.RELEASE-dist.zip && \
-    ln -s spring-xd-1.3.1.RELEASE current && \
-    mkdir current/xd/lib/none
-
-RUN wget -q "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.0.8.tar.gz" && \
-    tar -zxf mysql-connector-java-5.0.8.tar.gz && \
-    mv mysql-connector-java-5.0.8/mysql-connector-java-5.0.8-bin.jar current/xd/lib && \
-    rm -rf mysql-connector-java-5.0.8 && \
-    rm -f mysql-connector-java-5.0.8.tar.gz && \
-    chown -R springxd:springxd spring-xd-1.3.1.RELEASE
-
-USER springxd
-ENV PATH $PATH:/usr/local/spring-xd/current/xd/bin:/usr/local/spring-xd/current/shell/bin:/usr/local/spring-xd/current/zookeeper/bin
-COPY xd-container-logback.groovy /usr/local/spring-xd/current/xd/config/xd-container-logback.groovy
-COPY xd-singlenode-logback.groovy /usr/local/spring-xd/current/xd/config/xd-singlenode-logback.groovy
-VOLUME ["/usr/local/spring-xd/current/xd/config"]
-EXPOSE 9393
-
-# Configure Java Library Repositories
-ENV PATH $PATH:/usr/local/anaconda2/bin
-ENV M2_HOME /usr/local/apache-maven
-ENV M2 $M2_HOME/bin 
-ENV PATH $PATH:$M2
-USER root
-COPY maven_settings.xml $M2_HOME/conf/settings.xml
-COPY ivy_settings.xml /usr/local/repositories/.groovy/grapeConfig.xml
-RUN mkdir -p /usr/local/repositories/.m2 && mkdir -p /usr/local/repositories/.groovy && chown -R springxd:springxd /usr/local/repositories
-
-# ########################
-# # nexus-ingest code   #
-# ########################
-WORKDIR /tmp
-RUN pwd
-COPY install-custom-software.sh /tmp/install-custom-software.sh
-RUN /bin/bash install-custom-software.sh
-RUN chown -R springxd:springxd /usr/local/spring-xd/spring-xd-1.3.1.RELEASE && \
-    chown -R springxd:springxd /usr/local/anaconda2/envs/nexus-xd-python-modules/ && \
-    chown -R springxd:springxd /usr/local/repositories
-VOLUME ["/usr/local/data/nexus"]
-
-COPY nexus-ingest.sh /usr/local/nexus-ingest.sh
-USER springxd
-ENTRYPOINT ["/usr/local/nexus-ingest.sh"]
-CMD ["--help"]
\ No newline at end of file
diff --git a/docker/ingest-base/README.md b/docker/ingest-base/README.md
deleted file mode 100644
index f86cd8f..0000000
--- a/docker/ingest-base/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# ingest-base
-
-This image is used as the base for the other ingest-* images in the nexusjpl organization.
\ No newline at end of file
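
Each downstream image consumed it via a plain FROM line; for example,
docker/ingest-container/Dockerfile (also removed in this PR) was simply:

    FROM nexusjpl/ingest-base

    USER springxd
    ENTRYPOINT ["/usr/local/nexus-ingest.sh"]
    CMD ["--container"]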
diff --git a/docker/ingest-base/install-custom-software.sh b/docker/ingest-base/install-custom-software.sh
deleted file mode 100755
index 8276fe4..0000000
--- a/docker/ingest-base/install-custom-software.sh
+++ /dev/null
@@ -1,82 +0,0 @@
-scriptdir=`dirname $0`
-
-homedir="/usr/local/spring-xd/current"
-condaenv="nexus-xd-python-modules"
-
-pushd $homedir
-mkdir nexus
-pushd nexus
-git init
-git pull https://github.com/dataplumber/nexus.git
-popd
-
-source activate $condaenv
-
-# Install spring-xd python module
-pushd nexus/nexus-ingest/spring-xd-python
-python setup.py install --force
-popd
-
-# Install protobuf generated artifacts
-pushd nexus/nexus-ingest/nexus-messages
-./gradlew clean build writeNewPom
-
-pomfile=`find build/poms/*.xml`
-jarfile=`find build/libs/*.jar`
-mvn install:install-file -DpomFile=$pomfile -Dfile=$jarfile
-
-pushd build/python/nexusproto
-python setup.py install --force
-popd
-popd
-
-# Install ingestion modules
-pushd nexus/nexus-ingest/nexus-xd-python-modules
-python setup.py install --force
-popd
-
-# Install shared Groovy scripts
-pushd nexus/nexus-ingest/groovy-scripts
-mkdir $homedir/xd-nexus-shared
-cp *.groovy $homedir/xd-nexus-shared
-popd
-
-# Start singlenode so we can interact with it
-nohup xd-singlenode --hadoopDistro none > /dev/null 2>&1 &
-
-# Delete all streams in Spring XD so we can update the custom modules
-touch /tmp/xdcommand
-echo stream all destroy --force > /tmp/xdcommand
-until xd-shell --cmdfile /tmp/xdcommand;
-do
-    sleep 1
-done
-
-# Build and upload dataset-tiler
-pushd nexus/nexus-ingest/dataset-tiler
-./gradlew clean build
-jarfile=`find build/libs/*.jar`
-touch /tmp/moduleupload
-echo module upload --type processor --name dataset-tiler --file $jarfile --force > /tmp/xdcommand
-xd-shell --cmdfile /tmp/xdcommand
-popd
-
-# Build and upload tcp-shell
-pushd nexus/nexus-ingest/tcp-shell
-./gradlew clean build
-jarfile=`find build/libs/*.jar`
-touch /tmp/moduleupload
-echo module upload --type processor --name tcpshell --file $jarfile --force > /tmp/xdcommand
-xd-shell --cmdfile /tmp/xdcommand
-popd
-
-# Build and upload nexus-sink
-pushd nexus/nexus-ingest/nexus-sink
-./gradlew clean build
-jarfile=`find build/libs/*.jar`
-touch /tmp/moduleupload
-echo module upload --type sink --name nexus --file $jarfile --force > /tmp/xdcommand
-xd-shell --cmdfile /tmp/xdcommand
-popd
-
-popd
\ No newline at end of file
diff --git a/docker/ingest-base/ivy_settings.xml b/docker/ingest-base/ivy_settings.xml
deleted file mode 100644
index 3cbe7c9..0000000
--- a/docker/ingest-base/ivy_settings.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-<!--
-     Licensed to the Apache Software Foundation (ASF) under one
-     or more contributor license agreements.  See the NOTICE file
-     distributed with this work for additional information
-     regarding copyright ownership.  The ASF licenses this file
-     to you under the Apache License, Version 2.0 (the
-     "License"); you may not use this file except in compliance
-     with the License.  You may obtain a copy of the License at
-       http://www.apache.org/licenses/LICENSE-2.0
-     Unless required by applicable law or agreed to in writing,
-     software distributed under the License is distributed on an
-     "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-     KIND, either express or implied.  See the License for the
-     specific language governing permissions and limitations
-     under the License.
--->
-<ivysettings>
-  <settings defaultResolver="downloadGrapes"/>
-  <resolvers>
-    <chain name="downloadGrapes" returnFirst="true">
-      <filesystem name="cachedGrapes">
-        <ivy pattern="/usr/local/repositories/.groovy/grapes/[organisation]/[module]/ivy-[revision].xml"/>
-        <artifact pattern="/usr/local/repositories/.groovy/grapes/[organisation]/[module]/[type]s/[artifact]-[revision](-[classifier]).[ext]"/>
-      </filesystem>
-      <ibiblio name="localm2" root="file:/usr/local/repositories/.m2" checkmodified="true" changingPattern=".*" changingMatcher="regexp" m2compatible="true"/>
-      <!-- todo add 'endorsed groovy extensions' resolver here -->
-      <ibiblio name="jcenter" root="https://jcenter.bintray.com/" m2compatible="true"/>
-      <ibiblio name="ibiblio" m2compatible="true"/>
-    </chain>
-  </resolvers>
-</ivysettings>
\ No newline at end of file
diff --git a/docker/ingest-base/maven_settings.xml b/docker/ingest-base/maven_settings.xml
deleted file mode 100644
index aa97c4c..0000000
--- a/docker/ingest-base/maven_settings.xml
+++ /dev/null
@@ -1,256 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!--
- | This is the configuration file for Maven. It can be specified at two levels:
- |
- |  1. User Level. This settings.xml file provides configuration for a single user,
- |                 and is normally provided in ${user.home}/.m2/settings.xml.
- |
- |                 NOTE: This location can be overridden with the CLI option:
- |
- |                 -s /path/to/user/settings.xml
- |
- |  2. Global Level. This settings.xml file provides configuration for all Maven
- |                 users on a machine (assuming they're all using the same Maven
- |                 installation). It's normally provided in
- |                 ${maven.home}/conf/settings.xml.
- |
- |                 NOTE: This location can be overridden with the CLI option:
- |
- |                 -gs /path/to/global/settings.xml
- |
- | The sections in this sample file are intended to give you a running start at
- | getting the most out of your Maven installation. Where appropriate, the default
- | values (values used when the setting is not specified) are provided.
- |
- |-->
-<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
-          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
-  <!-- localRepository
-   | The path to the local repository maven will use to store artifacts.
-   |
-   | Default: ${user.home}/.m2/repository -->
-  <localRepository>/usr/local/repositories/.m2</localRepository>
-
-  <!-- interactiveMode
-   | This will determine whether maven prompts you when it needs input. If set to false,
-   | maven will use a sensible default value, perhaps based on some other setting, for
-   | the parameter in question.
-   |
-   | Default: true
-  <interactiveMode>true</interactiveMode>
-  -->
-
-  <!-- offline
-   | Determines whether maven should attempt to connect to the network when executing a build.
-   | This will have an effect on artifact downloads, artifact deployment, and others.
-   |
-   | Default: false
-  <offline>false</offline>
-  -->
-
-  <!-- pluginGroups
-   | This is a list of additional group identifiers that will be searched when resolving plugins by their prefix, i.e.
-   | when invoking a command line like "mvn prefix:goal". Maven will automatically add the group identifiers
-   | "org.apache.maven.plugins" and "org.codehaus.mojo" if these are not already contained in the list.
-   |-->
-  <pluginGroups>
-    <!-- pluginGroup
-     | Specifies a further group identifier to use for plugin lookup.
-    <pluginGroup>com.your.plugins</pluginGroup>
-    -->
-  </pluginGroups>
-
-  <!-- proxies
-   | This is a list of proxies which can be used on this machine to connect to the network.
-   | Unless otherwise specified (by system property or command-line switch), the first proxy
-   | specification in this list marked as active will be used.
-   |-->
-  <proxies>
-    <!-- proxy
-     | Specification for one proxy, to be used in connecting to the network.
-     |
-    <proxy>
-      <id>optional</id>
-      <active>true</active>
-      <protocol>http</protocol>
-      <username>proxyuser</username>
-      <password>proxypass</password>
-      <host>proxy.host.net</host>
-      <port>80</port>
-      <nonProxyHosts>local.net|some.host.com</nonProxyHosts>
-    </proxy>
-    -->
-  </proxies>
-
-  <!-- servers
-   | This is a list of authentication profiles, keyed by the server-id used within the system.
-   | Authentication profiles can be used whenever maven must make a connection to a remote server.
-   |-->
-  <servers>
-    <!-- server
-     | Specifies the authentication information to use when connecting to a particular server, identified by
-     | a unique name within the system (referred to by the 'id' attribute below).
-     |
-     | NOTE: You should either specify username/password OR privateKey/passphrase, since these pairings are
-     |       used together.
-     |
-    <server>
-      <id>deploymentRepo</id>
-      <username>repouser</username>
-      <password>repopwd</password>
-    </server>
-    -->
-
-    <!-- Another sample, using keys to authenticate.
-    <server>
-      <id>siteServer</id>
-      <privateKey>/path/to/private/key</privateKey>
-      <passphrase>optional; leave empty if not used.</passphrase>
-    </server>
-    -->
-  </servers>
-
-  <!-- mirrors
-   | This is a list of mirrors to be used in downloading artifacts from remote repositories.
-   |
-   | It works like this: a POM may declare a repository to use in resolving certain artifacts.
-   | However, this repository may have problems with heavy traffic at times, so people have mirrored
-   | it to several places.
-   |
-   | That repository definition will have a unique id, so we can create a mirror reference for that
-   | repository, to be used as an alternate download site. The mirror site will be the preferred
-   | server for that repository.
-   |-->
-  <mirrors>
-    <!-- mirror
-     | Specifies a repository mirror site to use instead of a given repository. The repository that
-     | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used
-     | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.
-     |
-    <mirror>
-      <id>mirrorId</id>
-      <mirrorOf>repositoryId</mirrorOf>
-      <name>Human Readable Name for this Mirror.</name>
-      <url>http://my.repository.com/repo/path</url>
-    </mirror>
-     -->
-  </mirrors>
-
-  <!-- profiles
-   | This is a list of profiles which can be activated in a variety of ways, and which can modify
-   | the build process. Profiles provided in the settings.xml are intended to provide local machine-
-   | specific paths and repository locations which allow the build to work in the local environment.
-   |
-   | For example, if you have an integration testing plugin - like cactus - that needs to know where
-   | your Tomcat instance is installed, you can provide a variable here such that the variable is
-   | dereferenced during the build process to configure the cactus plugin.
-   |
-   | As noted above, profiles can be activated in a variety of ways. One way - the activeProfiles
-   | section of this document (settings.xml) - will be discussed later. Another way essentially
-   | relies on the detection of a system property, either matching a particular value for the property,
-   | or merely testing its existence. Profiles can also be activated by JDK version prefix, where a
-   | value of '1.4' might activate a profile when the build is executed on a JDK version of '1.4.2_07'.
-   | Finally, the list of active profiles can be specified directly from the command line.
-   |
-   | NOTE: For profiles defined in the settings.xml, you are restricted to specifying only artifact
-   |       repositories, plugin repositories, and free-form properties to be used as configuration
-   |       variables for plugins in the POM.
-   |
-   |-->
-  <profiles>
-    <!-- profile
-     | Specifies a set of introductions to the build process, to be activated using one or more of the
-     | mechanisms described above. For inheritance purposes, and to activate profiles via <activatedProfiles/>
-     | or the command line, profiles have to have an ID that is unique.
-     |
-     | An encouraged best practice for profile identification is to use a consistent naming convention
-     | for profiles, such as 'env-dev', 'env-test', 'env-production', 'user-jdcasey', 'user-brett', etc.
-     | This will make it more intuitive to understand what the set of introduced profiles is attempting
-     | to accomplish, particularly when you only have a list of profile id's for debug.
-     |
-     | This profile example uses the JDK version to trigger activation, and provides a JDK-specific repo.
-    <profile>
-      <id>jdk-1.4</id>
-
-      <activation>
-        <jdk>1.4</jdk>
-      </activation>
-
-      <repositories>
-        <repository>
-          <id>jdk14</id>
-          <name>Repository for JDK 1.4 builds</name>
-          <url>http://www.myhost.com/maven/jdk14</url>
-          <layout>default</layout>
-          <snapshotPolicy>always</snapshotPolicy>
-        </repository>
-      </repositories>
-    </profile>
-    -->
-
-    <!--
-     | Here is another profile, activated by the system property 'target-env' with a value of 'dev',
-     | which provides a specific path to the Tomcat instance. To use this, your plugin configuration
-     | might hypothetically look like:
-     |
-     | ...
-     | <plugin>
-     |   <groupId>org.myco.myplugins</groupId>
-     |   <artifactId>myplugin</artifactId>
-     |
-     |   <configuration>
-     |     <tomcatLocation>${tomcatPath}</tomcatLocation>
-     |   </configuration>
-     | </plugin>
-     | ...
-     |
-     | NOTE: If you just wanted to inject this configuration whenever someone set 'target-env' to
-     |       anything, you could just leave off the <value/> inside the activation-property.
-     |
-    <profile>
-      <id>env-dev</id>
-
-      <activation>
-        <property>
-          <name>target-env</name>
-          <value>dev</value>
-        </property>
-      </activation>
-
-      <properties>
-        <tomcatPath>/path/to/tomcat/instance</tomcatPath>
-      </properties>
-    </profile>
-    -->
-  </profiles>
-
-  <!-- activeProfiles
-   | List of profiles that are active for all builds.
-   |
-  <activeProfiles>
-    <activeProfile>alwaysActiveProfile</activeProfile>
-    <activeProfile>anotherAlwaysActiveProfile</activeProfile>
-  </activeProfiles>
-  -->
-</settings>
\ No newline at end of file
diff --git a/docker/ingest-base/nexus-ingest.sh b/docker/ingest-base/nexus-ingest.sh
deleted file mode 100755
index 2380c30..0000000
--- a/docker/ingest-base/nexus-ingest.sh
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/bin/bash
- 
-# NOTE: This requires GNU getopt.  On Mac OS X and FreeBSD, you have to install this
-# separately; see below.
-TEMP=`getopt -o scah --long singleNode,container,admin,help -n 'nexus-ingest' -- "$@"`
-
-if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
-
-# Note the quotes around `$TEMP': they are essential!
-eval set -- "$TEMP"
-
-SINGLENODE=false
-CONTAINER=false
-ADMIN=false
-while true; do
-  case "$1" in
-    -s | --singleNode ) SINGLENODE=true; shift ;;
-    -c | --container ) CONTAINER=true; shift ;;
-    -a | --admin ) ADMIN=true; shift ;;
-    -h | --help ) 
-        echo "usage: nexus-ingest [-s|--singleNode] [-c|--container] [-a|--admin]" >&2
-        exit 2
-        ;;
-    -- ) shift; break ;;
-    * ) break ;;
-  esac
-done
-
-if [ "$SINGLENODE" = true ]; then
-    source activate nexus-xd-python-modules
-    
-    export JAVA_OPTS="-Dgrape.root=/usr/local/repositories/.groovy/grapes -Dgroovy.root=/usr/local/repositories/.groovy/ -Dgrape.config=/usr/local/repositories/.groovy/grapeConfig.xml"
-    
-    xd-singlenode --hadoopDistro none
-elif [ "$CONTAINER"  = true ]; then
-    source activate nexus-xd-python-modules
-    export SPRING_DATASOURCE_URL="jdbc:mysql://$MYSQL_PORT_3306_TCP_ADDR:$MYSQL_PORT_3306_TCP_PORT/xdjob"
-    export SPRING_DATASOURCE_USERNAME=$MYSQL_USER
-    export SPRING_DATASOURCE_PASSWORD=$MYSQL_PASSWORD
-    export SPRING_DATASOURCE_DRIVERCLASSNAME="com.mysql.jdbc.Driver"
-
-    export ZK_NAMESPACE=$ZOOKEEPER_XD_CHROOT
-    export ZK_CLIENT_CONNECT=$ZOOKEEPER_CONNECT
-    export ZK_CLIENT_SESSIONTIMEOUT=60000
-    export ZK_CLIENT_CONNECTIONTIMEOUT=30000
-    export ZK_CLIENT_INITIALRETRYWAIT=1000
-    export ZK_CLIENT_RETRYMAXATTEMPTS=3
-
-    export SPRING_REDIS_HOST=$REDIS_ADDR
-    export SPRING_REDIS_PORT=$REDIS_PORT
-    
-    export XD_TRANSPORT="kafka"
-    export XD_MESSAGEBUS_KAFKA_BROKERS=$KAFKA_BROKERS
-    export XD_MESSAGEBUS_KAFKA_ZKADDRESS=$KAFKA_ZKADDRESS
-    export XD_MESSAGEBUS_KAFKA_MODE="embeddedHeaders"
-    export XD_MESSAGEBUS_KAFKA_OFFSETMANAGEMENT="kafkaNative"
-    export XD_MESSAGEBUS_KAFKA_HEADERS="absolutefilepath,spec"
-    export XD_MESSAGEBUS_KAFKA_SOCKETBUFFERSIZE=3097152
-    export XD_MESSAGEBUS_KAFKA_DEFAULT_QUEUESIZE=4
-    export XD_MESSAGEBUS_KAFKA_DEFAULT_FETCHSIZE=2048576
-    
-    export JAVA_OPTS="-Dgrape.root=/usr/local/repositories/.groovy/grapes -Dgroovy.root=/usr/local/repositories/.groovy/ -Dgrape.config=/usr/local/repositories/.groovy/grapeConfig.xml"
-    
-    until nc --send-only -v -w30 $MYSQL_PORT_3306_TCP_ADDR $MYSQL_PORT_3306_TCP_PORT </dev/null
-    do
-      echo "Waiting for database connection..."
-      # wait for 5 seconds before checking again
-      sleep 5
-    done
-    
-    xd-container --hadoopDistro none
-elif [ "$ADMIN"  = true ]; then
-    source activate nexus-xd-python-modules
-    export SPRING_DATASOURCE_URL="jdbc:mysql://$MYSQL_PORT_3306_TCP_ADDR:$MYSQL_PORT_3306_TCP_PORT/xdjob"
-    export SPRING_DATASOURCE_USERNAME=$MYSQL_USER
-    export SPRING_DATASOURCE_PASSWORD=$MYSQL_PASSWORD
-    export SPRING_DATASOURCE_DRIVERCLASSNAME="com.mysql.jdbc.Driver"
-
-    export ZK_NAMESPACE=$ZOOKEEPER_XD_CHROOT
-    export ZK_CLIENT_CONNECT=$ZOOKEEPER_CONNECT
-    export ZK_CLIENT_SESSIONTIMEOUT=60000
-    export ZK_CLIENT_CONNECTIONTIMEOUT=30000
-    export ZK_CLIENT_INITIALRETRYWAIT=1000
-    export ZK_CLIENT_RETRYMAXATTEMPTS=3
-
-    export SPRING_REDIS_HOST=$REDIS_ADDR
-    export SPRING_REDIS_PORT=$REDIS_PORT
-    
-    export XD_TRANSPORT="kafka"
-    export XD_MESSAGEBUS_KAFKA_BROKERS=$KAFKA_BROKERS
-    export XD_MESSAGEBUS_KAFKA_ZKADDRESS=$KAFKA_ZKADDRESS
-    export XD_MESSAGEBUS_KAFKA_MODE="embeddedHeaders"
-    export XD_MESSAGEBUS_KAFKA_OFFSETMANAGEMENT="kafkaNative"
-    export XD_MESSAGEBUS_KAFKA_HEADERS="absolutefilepath,spec"
-    export XD_MESSAGEBUS_KAFKA_SOCKETBUFFERSIZE=3097152
-    export XD_MESSAGEBUS_KAFKA_DEFAULT_QUEUESIZE=4
-    export XD_MESSAGEBUS_KAFKA_DEFAULT_FETCHSIZE=2048576
-    
-    export JAVA_OPTS="-Dgrape.root=/usr/local/repositories/.groovy/grapes -Dgroovy.root=/usr/local/repositories/.groovy/ -Dgrape.config=/usr/local/repositories/.groovy/grapeConfig.xml"
-    
-    until nc --send-only -v -w30 $MYSQL_PORT_3306_TCP_ADDR $MYSQL_PORT_3306_TCP_PORT </dev/null
-    do
-      echo "Waiting for database connection..."
-      # wait for 5 seconds before checking again
-      sleep 5
-    done
-    
-    zookeeper-client -server $ZK_CLIENT_CONNECT -cmd create /$ZOOKEEPER_XD_CHROOT ""
-    
-    xd-admin --hadoopDistro none
-else
-    echo "One of -s, -c, or -a is required."
-    echo "usage: nexus-ingest [-s|--singleNode] [-c|--container] [-a|--admin]" >&2
-    exit 3
-fi
\ No newline at end of file
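
The entrypoint script above requires exactly one of its three mode flags.
For example (image names as documented in the ingest-* READMEs; the admin and
container modes additionally need the MySQL/Redis/ZooKeeper/Kafka variables
described there):

    # single-node Spring XD; the ingest-singlenode image passes --singleNode by default
    docker run -it nexusjpl/ingest-singlenode
    # distributed admin node, overriding the default --help CMD of ingest-base
    docker run -it nexusjpl/ingest-base --admin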
diff --git a/docker/ingest-base/stream-definitions b/docker/ingest-base/stream-definitions
deleted file mode 100644
index 0bd79e3..0000000
--- a/docker/ingest-base/stream-definitions
+++ /dev/null
@@ -1,17 +0,0 @@
-
-stream create --name ingest-avhrr --definition "scan-for-avhrr-granules: file --dir=/usr/local/data/nexus/AVHRR_L4_GLOB_V2/daily_data --mode=ref --pattern=201*.nc --maxMessages=1 | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=1296 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=analysed_sst,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp,STORED_VAR_NAME=analysed_sst --bufferSize=1000000 --remoteReplyTimeout=360000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=AVHRR_OI_L4_GHRSST_NCEI' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
-
-stream create --name ingest-avhrr-clim --definition "scan-for-avhrr-clim-granules: file --dir=/usr/local/data/nexus/AVHRR_L4_GLOB_V2/climatology_5day --mode=ref --pattern=*.nc --maxMessages=1 | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=1296 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=analysed_sst,META=analysed_sst_std,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp,STORED_VAR_NAME=analysed_sst --bufferSize=1000000 --remoteReplyTimeout=360000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | add-time: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/add-time-from-granulename.groovy --variables='regex=^(\\d{3}),dateformat=DDD' | add-day-atr: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/add-day-of-year-attribute.groovy --variables='regex=^(\\d{3})' | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=AVHRR_OI_L4_GHRSST_NCEI_CLIM' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
-
-module.python-chain.count=5,module.nexus.count=5
-
-
-
-
-stream create --name ingest-modis-aqua-aod-500 --definition "file --dir=/usr/local/data/nexus/MODIS_AQUA_AOD/scrubbed_daily_data --mode=ref --pattern=*.nc | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=500 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=MYD08_D3_6_Aerosol_Optical_Depth_Land_Ocean_Mean,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp --bufferSize=1000000 --remoteReplyTimeout=1300000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=MODIS_AQUA_AOD_500' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
-
-stream create --name ingest-modis-aqua-aod-16 --definition "file --dir=/usr/local/data/nexus/MODIS_AQUA_AOD/scrubbed_daily_data --mode=ref --pattern=*.nc | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=16 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=MYD08_D3_6_Aerosol_Optical_Depth_Land_Ocean_Mean,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp --bufferSize=1000000 --remoteReplyTimeout=1300000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=MODIS_AQUA_AOD_16' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
-
-stream create --name ingest-modis-terra-aod-500 --definition "file --dir=/usr/local/data/nexus/MODIS_TERRA_AOD/scrubbed_daily_data --mode=ref --pattern=*.nc | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=500 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=MOD08_D3_6_Aerosol_Optical_Depth_Land_Ocean_Mean,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp --bufferSize=1000000 --remoteReplyTimeout=1300000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=MODIS_TERRA_AOD_500' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
-
-stream create --name ingest-modis-terra-aod-16 --definition "file --dir=/usr/local/data/nexus/MODIS_TERRA_AOD/scrubbed_daily_data --mode=ref --pattern=*.nc | header-absolutefilepath: header-enricher --headers={\"absolutefilepath\":\"payload\"} | dataset-tiler --dimensions=lat,lon --tilesDesired=16 | join-with-static-time: transform --expression=\"'time:0:1,'+payload.stream().collect(T(java.util.stream.Collectors).joining(';time:0:1,'))+';file://'+headers['absolutefilepath']\" | python-chain: tcpshell --command='python -u -m nexusxd.processorchain' --environment=CHAIN=nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.tilesumarizingprocessor.summarize_nexustile,VARIABLE=MOD08_D3_6_Aerosol_Optical_Depth_Land_Ocean_Mean,LATITUDE=lat,LONGITUDE=lon,TIME=time,READER=GRIDTILE,TEMP_DIR=/tmp --bufferSize=1000000 --remoteReplyTimeout=1300000 | add-id: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/generate-tile-id.groovy | set-dataset-name: script --script=file:///usr/local/spring-xd/current/xd-nexus-shared/set-dataset-name.groovy --variables='datasetname=MODIS_TERRA_AOD_16' | nexus --cassandraContactPoints=cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6 --cassandraKeyspace=nexustiles --solrCloudZkHost=zk1:2181,zk2:2181,zk3:2181/solr --solrCollection=nexustiles --cassandraPort=9042"
diff --git a/docker/ingest-base/xd-container-logback.groovy b/docker/ingest-base/xd-container-logback.groovy
deleted file mode 100644
index bbb5179..0000000
--- a/docker/ingest-base/xd-container-logback.groovy
+++ /dev/null
@@ -1,98 +0,0 @@
-/* Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-import org.springframework.xd.dirt.util.logging.CustomLoggerConverter
-import org.springframework.xd.dirt.util.logging.VersionPatternConverter
-import ch.qos.logback.classic.encoder.PatternLayoutEncoder
-import ch.qos.logback.core.rolling.RollingFileAppender
-import ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy
-
-// We highly recommend that you always add a status listener just
-// after the last import statement and before all other statements
-// NOTE - this includes logging configuration in the log and stacktraces in the event of errors
-// statusListener(OnConsoleStatusListener)
-
-// Emulates Log4j formatting
-conversionRule("category", CustomLoggerConverter)
-
-//XD Version
-conversionRule("version", VersionPatternConverter)
-
-def ISO8601 = "yyyy-MM-dd'T'HH:mm:ssZ"
-def datePattern = ISO8601
-
-appender("STDOUT", ConsoleAppender) {
-	encoder(PatternLayoutEncoder) {
-		pattern = "%d{${datePattern}} %version %level{5} %thread %category{2} - %msg%n"
-	}
-}
-
-def logfileNameBase = "${System.getProperty('xd.home')}/logs/container-${System.getProperty('PID')}"
-
-appender("FILE", RollingFileAppender) {
-	file = "${logfileNameBase}.log"
-	append = true
-	rollingPolicy(TimeBasedRollingPolicy) {
-		fileNamePattern = "${logfileNameBase}-%d{yyyy-MM-dd}.%i.log"
-		timeBasedFileNamingAndTriggeringPolicy(SizeAndTimeBasedFNATP) {
-			maxFileSize = "100MB"
-		}
-		maxHistory = 30
-	}
-
-	encoder(PatternLayoutEncoder) {
-		pattern = "%d{${datePattern}} %version %level{5} %thread %category{2} - %msg%n"
-	}
-}
-
-root(WARN, ["STDOUT", "FILE"])
-
-logger("org.nasa", DEBUG)
-logger("org.springframework.scheduling.concurrent", DEBUG, ["FILE"], false)
-
-logger("org.springframework.xd", WARN)
-logger("org.springframework.xd.dirt.server", INFO)
-logger("org.springframework.xd.dirt.util.XdConfigLoggingInitializer", INFO)
-logger("xd.sink", INFO)
-logger("org.springframework.xd.sqoop", INFO)
-
-logger("org.springframework", WARN)
-logger("org.springframework.boot", WARN)
-logger("org.springframework.integration", WARN)
-logger("org.springframework.retry", WARN)
-logger("org.springframework.amqp", WARN)
-
-logger("org.nasa.ingest.tcpshell", INFO)
-
-//This prevents the "Error:KeeperErrorCode = NodeExists" INFO messages
-//logged by ZooKeeper when a parent node does not exist while
-//invoking Curator's creatingParentsIfNeeded node builder.
-logger("org.apache.zookeeper.server.PrepRequestProcessor", WARN)
-
-// This prevents the WARN level about a non-static, @Bean method in Spring Batch that is irrelevant
-logger("org.springframework.context.annotation.ConfigurationClassEnhancer", ERROR)
-
-// This prevents boot LoggingApplicationListener logger's misleading warning message
-logger("org.springframework.boot.logging.LoggingApplicationListener", ERROR)
-
-// This prevents Hadoop configuration warnings
-logger("org.apache.hadoop.conf.Configuration", ERROR)
-
-
-//This is for the throughput-sampler sink module
-logger("org.springframework.xd.integration.throughput", INFO)
-
-// Suppress json-path warning until SI 4.2 is released
-logger("org.springframework.integration.config.IntegrationRegistrar", ERROR)
diff --git a/docker/ingest-base/xd-singlenode-logback.groovy b/docker/ingest-base/xd-singlenode-logback.groovy
deleted file mode 100644
index f57d12b..0000000
--- a/docker/ingest-base/xd-singlenode-logback.groovy
+++ /dev/null
@@ -1,103 +0,0 @@
-/* Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
-*/
-
-import org.springframework.xd.dirt.util.logging.CustomLoggerConverter
-import org.springframework.xd.dirt.util.logging.VersionPatternConverter
-import ch.qos.logback.classic.encoder.PatternLayoutEncoder
-import ch.qos.logback.core.rolling.RollingFileAppender
-
-// We highly recommend that you always add a status listener just
-// after the last import statement and before all other statements
-// NOTE - this includes logging configuration in the log and stacktraces in the event of errors
-// statusListener(OnConsoleStatusListener)
-
-// Emulates Log4j formatting
-conversionRule("category", CustomLoggerConverter)
-
-//XD Version
-conversionRule("version", VersionPatternConverter)
-
-def ISO8601 = "yyyy-MM-dd'T'HH:mm:ssZ"
-def datePattern = ISO8601
-
-appender("STDOUT", ConsoleAppender) {
-	encoder(PatternLayoutEncoder) {
-		pattern = "%d{${datePattern}} %version %level{5} %thread %category{2} - %msg%n"
-	}
-}
-
-def logfileNameBase = "${System.getProperty('xd.home')}/logs/singlenode-${System.getProperty('PID')}"
-
-appender("FILE", RollingFileAppender) {
-	file = "${logfileNameBase}.log"
-	append = false
-	rollingPolicy(TimeBasedRollingPolicy) {
-		fileNamePattern = "${logfileNameBase}-%d{yyyy-MM-dd}.%i.log"
-		timeBasedFileNamingAndTriggeringPolicy(SizeAndTimeBasedFNATP) {
-			maxFileSize = "100KB"
-		}
-	}
-
-	encoder(PatternLayoutEncoder) {
-		pattern = "%d{${datePattern}} %version %level{5} %thread %category{2} - %msg%n"
-	}
-}
-
-root(WARN, ["STDOUT", "FILE"])
-
-logger("org.nasa", INFO)
-logger("org.springframework.xd", WARN)
-logger("org.springframework.xd.dirt.server", INFO)
-logger("org.springframework.xd.dirt.util.XdConfigLoggingInitializer", INFO)
-logger("xd.sink", INFO)
-logger("org.springframework.xd.sqoop", INFO)
-// This is for the throughput-sampler sink module
-logger("org.springframework.xd.integration.throughput", INFO)
-
-logger("org.springframework", WARN)
-logger("org.springframework.boot", WARN)
-logger("org.springframework.integration", WARN)
-logger("org.springframework.retry", WARN)
-logger("org.springframework.amqp", WARN)
-
-// Below this line are specific settings for things that are too noisy
-logger("org.springframework.beans.factory.config", ERROR)
-logger("org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer", ERROR)
-
-// This prevents the WARN level InstanceNotFoundException: org.apache.ZooKeeperService:name0=StandaloneServer_port-1
-logger("org.apache.zookeeper.jmx.MBeanRegistry", ERROR)
-
-
-// This prevents the WARN level about a non-static, @Bean method in Spring Batch that is irrelevant
-logger("org.springframework.context.annotation.ConfigurationClassEnhancer", ERROR)
-
-// This prevents the "Error:KeeperErrorCode = NodeExists" INFO messages
-// logged by ZooKeeper when a parent node does not exist while
-// invoking Curator's creatingParentsIfNeeded node builder.
-logger("org.apache.zookeeper.server.PrepRequestProcessor", WARN)
-
-
-// This prevents boot LoggingApplicationListener logger's misleading warning message
-logger("org.springframework.boot.logging.LoggingApplicationListener", ERROR)
-
-
-
-// This prevents Hadoop configuration warnings
-logger("org.apache.hadoop.conf.Configuration", ERROR)
-
-// Suppress json-path warning until SI 4.2 is released
-logger("org.springframework.integration.config.IntegrationRegistrar", ERROR)
-
diff --git a/docker/ingest-container/Dockerfile b/docker/ingest-container/Dockerfile
deleted file mode 100644
index 873a800..0000000
--- a/docker/ingest-container/Dockerfile
+++ /dev/null
@@ -1,5 +0,0 @@
-FROM nexusjpl/ingest-base
-
-USER springxd
-ENTRYPOINT ["/usr/local/nexus-ingest.sh"]
-CMD ["--container"]
\ No newline at end of file
diff --git a/docker/ingest-container/README.md b/docker/ingest-container/README.md
deleted file mode 100644
index ee17bc3..0000000
--- a/docker/ingest-container/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# ingest-container Docker
-
-This can be used to start spring-xd as a container for use in distributed mode with all nexus modules already installed.
-
-# Docker Compose
-
-Use the [docker-compose.yml](docker-compose.yml) file to start up a container and place it on the same network as services started from nexusjpl/ingest-admin. Example:
-
-    MYSQL_PASSWORD=admin ZK_HOST_IP=10.200.10.1 KAFKA_HOST_IP=10.200.10.1 docker-compose up
-
-`MYSQL_PASSWORD` must match the password used for the user called `xd` when the MySQL database was initialized.
-`ZK_HOST_IP` must be set to a valid IP address of a ZooKeeper host that will be used to manage Spring XD.
-`KAFKA_HOST_IP` must be set to a valid IP address of a Kafka broker that will be used for the transport layer of Spring XD.
-
-# Docker Run
-
-This container relies on 5 external services that must already be running: nexusjpl/ingest-admin, MySQL, Redis, Zookeeper, and Kafka.
-
-To start the server use:
-
-    docker run -it \
-    -e "MYSQL_PORT_3306_TCP_ADDR=mysqldb" -e "MYSQL_PORT_3306_TCP_PORT=3306" \
-    -e "MYSQL_USER=xd" -e "MYSQL_PASSWORD=admin" \
-    -e "REDIS_ADDR=redis" -e "REDIS_PORT=6379" \
-    -e "ZOOKEEPER_CONNECT=zkhost:2181" -e "ZOOKEEPER_XD_CHROOT=springxd" \
-    -e "KAFKA_BROKERS=kafka1:9092" -e "KAFKA_ZKADDRESS=zkhost:2181/kafka" \
-    --add-host="zkhost:10.200.10.1" \
-    --add-host="kafka1:10.200.10.1" \
-    --network container:ingest-admin \
-    --name xd-container nexusjpl/ingest-container
-
-This mode requires a number of environment variables to be defined.
-
-#####  `MYSQL_PORT_3306_TCP_ADDR`
-
-Address of a running MySQL service
-
-#####  `MYSQL_PORT_3306_TCP_PORT`
-
-Port for running MySQL service
-
-#####  `MYSQL_USER`
-
-Username used to connect to the MySQL service
-
-#####  `MYSQL_PASSWORD`
-
-Password for connecting to MySQL service
-
-#####  `ZOOKEEPER_CONNECT`
-
-ZooKeeper connect string. This can be a comma-delimited list of host:port values.
-
-#####  `ZOOKEEPER_XD_CHROOT`
-
-ZooKeeper root node for Spring XD
-
-#####  `REDIS_ADDR`
-
-Address of a running Redis service
-
-#####  `REDIS_PORT`
-
-Port for running Redis service
-
-#####  `KAFKA_BROKERS`
-
-Comma-delimited list of host:port values which define the Kafka brokers used for transport.
-
-#####  `KAFKA_ZKADDRESS`
-
-Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port of a ZooKeeper server.
-
-The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of `/chroot/path` you would give the connection string as `hostname1:port1,hostname2:port2,hostname3:port3/chroot/path`.
\ No newline at end of file
diff --git a/docker/ingest-container/docker-compose.yml b/docker/ingest-container/docker-compose.yml
deleted file mode 100644
index 8800aaf..0000000
--- a/docker/ingest-container/docker-compose.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-version: '3'
-
-networks:
-  ingestadmin_ingestnetwork:
-      external: true
-  nexus:
-      external: true
-
-volumes:
-  data-volume:
-
-services:
-
-    xd-container:
-        image: nexusjpl/ingest-container:1
-        container_name: xd-container
-        command: [-c]
-        environment:
-            - MYSQL_PORT_3306_TCP_ADDR=mysqldb
-            - MYSQL_PORT_3306_TCP_PORT=3306
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD
-            - REDIS_ADDR=redis
-            - REDIS_PORT=6379
-            - "ZOOKEEPER_CONNECT=zkhost:2181"
-            - ZOOKEEPER_XD_CHROOT=springxd
-            - "KAFKA_BROKERS=kafka1:9092"
-            - "KAFKA_ZKADDRESS=zkhost:2181/kafka"
-        external_links:
-            - mysqldb
-            - redis
-        extra_hosts:
-            - "zkhost:$ZK_HOST_IP"
-            - "kafka1:$KAFKA_HOST_IP"
-        networks:
-            - default
-            - ingestadmin_ingestnetwork
-            - nexus
-        volumes:
-              - data-volume:/usr/local/data/nexus
-        deploy:
-            placement:
-                constraints:
-                    - node.labels.nexus.type == ingest
diff --git a/docker/ingest-singlenode/Dockerfile b/docker/ingest-singlenode/Dockerfile
deleted file mode 100644
index 70da4fe..0000000
--- a/docker/ingest-singlenode/Dockerfile
+++ /dev/null
@@ -1,5 +0,0 @@
-FROM nexusjpl/ingest-base
-
-USER springxd
-ENTRYPOINT ["/usr/local/nexus-ingest.sh"]
-CMD ["--singleNode"]
\ No newline at end of file
diff --git a/docker/ingest-singlenode/README.md b/docker/ingest-singlenode/README.md
deleted file mode 100644
index d263618..0000000
--- a/docker/ingest-singlenode/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# ingest-singlenode Docker
-
-This can be used to start Spring XD in singlenode mode with all NEXUS modules already installed.
-
-# Singlenode Mode
-
-To start the server in singleNode mode, use:
-
-    docker run -it -v ~/data/:/usr/local/data/nexus -p 9393:9393 --name nexus-ingest nexusjpl/ingest-singlenode
-
-This starts a singleNode instance of Spring XD with a data volume mounted from the host machine's home directory to be used for ingestion. It also exposes the Admin UI on port 9393 of the host machine.
-
-You can then connect to the Admin UI at http://localhost:9393/admin-ui.
-
-# XD Shell
-
-## Using Docker Exec
-
-Once the nexus-ingest container is running, you can use docker exec to start an XD Shell that communicates with the singlenode server:
-
-    docker exec -it nexus-ingest xd-shell
-
-## Using Standalone Container
-
-You can use the springxd/shell Docker image to start a separate container running an XD Shell connected to the singlenode server:
-
-    docker run -it --network container:nexus-ingest springxd/shell
\ No newline at end of file
diff --git a/docker/jupyter/Dockerfile b/docker/jupyter/Dockerfile
new file mode 100644
index 0000000..2ba2c25
--- /dev/null
+++ b/docker/jupyter/Dockerfile
@@ -0,0 +1,41 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+FROM jupyter/scipy-notebook
+
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
+
+USER root
+RUN apt-get update && \
+    apt-get install -y git libgeos-dev
+USER jovyan
+
+COPY requirements.txt /tmp
+RUN pip install -r /tmp/requirements.txt && \
+    conda install -y basemap
+
+ENV CHOWN_HOME_OPTS='-R'
+ENV REBUILD_CODE=true
+RUN mkdir -p /home/jovyan/Quickstart && \
+    mkdir -p /home/jovyan/nexuscli && \
+    cd /home/jovyan/nexuscli && \
+    git init && \
+    git remote add -f origin https://github.com/apache/incubator-sdap-nexus && \
+    git config core.sparseCheckout true && \
+    echo "client" >> .git/info/sparse-checkout && \
+    git pull origin master && \
+    cd client && \
+    python setup.py install
+
+COPY ["Time Series Example.ipynb", "/home/jovyan/Quickstart/Time Series Example.ipynb"]
diff --git a/docker/jupyter/Time Series Example.ipynb b/docker/jupyter/Time Series Example.ipynb
new file mode 100644
index 0000000..9fe5ee6
--- /dev/null
+++ b/docker/jupyter/Time Series Example.ipynb	
@@ -0,0 +1,161 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Start Here\n",
+    "\n",
+    "In the cell below are a few functions that help with plotting data using `matplotlib`. Run the cell to define the functions."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "def show_plot(x_data, y_data, x_label, y_label):\n",
+    "    \"\"\"\n",
+    "    Display a simple line plot.\n",
+    "    \n",
+    "    :param x_data: Numpy array containing data for the X axis\n",
+    "    :param y_data: Numpy array containing data for the Y axis\n",
+    "    :param x_label: Label applied to X axis\n",
+    "    :param y_label: Label applied to Y axis\n",
+    "    \"\"\"\n",
+    "    plt.figure(figsize=(10,5), dpi=100)\n",
+    "    plt.plot(x_data, y_data, 'b-', marker='|', markersize=2.0, mfc='b')\n",
+    "    plt.grid(b=True, which='major', color='k', linestyle='-')\n",
+    "    plt.xlabel(x_label)\n",
+    "    plt.ylabel (y_label)\n",
+    "    plt.show()\n",
+    "    \n",
+    "def plot_box(bbox):\n",
+    "    \"\"\"\n",
+    "    Display a Green bounding box on an image of the blue marble.\n",
+    "    \n",
+    "    :param bbox: Shapely Polygon that defines the bounding box to display\n",
+    "    \"\"\"\n",
+    "    min_lon, min_lat, max_lon, max_lat = bbox.bounds\n",
+    "    import matplotlib.pyplot as plt1\n",
+    "    from matplotlib.patches import Polygon\n",
+    "    from mpl_toolkits.basemap import Basemap\n",
+    "\n",
+    "    map = Basemap()\n",
+    "    map.bluemarble(scale=0.5)\n",
+    "    poly = Polygon([(min_lon,min_lat),(min_lon,max_lat),(max_lon,max_lat),(max_lon,min_lat)],\n",
+    "                   facecolor=(0,0,0,0.0),edgecolor='green',linewidth=2)\n",
+    "    plt1.gca().add_patch(poly)\n",
+    "    plt1.gcf().set_size_inches(10,15)\n",
+    "    \n",
+    "    plt1.show()\n",
+    "    \n",
+    "def show_plot_two_series(x_data_a, x_data_b, y_data_a, y_data_b, x_label, \n",
+    "                         y_label_a, y_label_b, series_a_label, series_b_label):\n",
+    "    \"\"\"\n",
+    "    Display a line plot of two series\n",
+    "    \n",
+    "    :param x_data_a: Numpy array containing data for the Series A X axis\n",
+    "    :param x_data_b: Numpy array containing data for the Series B X axis\n",
+    "    :param y_data_a: Numpy array containing data for the Series A Y axis\n",
+    "    :param y_data_b: Numpy array containing data for the Series B Y axis\n",
+    "    :param x_label: Label applied to X axis\n",
+    "    :param y_label_a: Label applied to Y axis for Series A\n",
+    "    :param y_label_b: Label applied to Y axis for Series B\n",
+    "    :param series_a_label: Name of Series A\n",
+    "    :param series_b_label: Name of Series B\n",
+    "    \"\"\"\n",
+    "    fig, ax1 = plt.subplots(figsize=(10,5), dpi=100)\n",
+    "    series_a, = ax1.plot(x_data_a, y_data_a, 'b-', marker='|', markersize=2.0, mfc='b', label=series_a_label)\n",
+    "    ax1.set_ylabel(y_label_a, color='b')\n",
+    "    ax1.tick_params('y', colors='b')\n",
+    "    ax1.set_ylim(min(0, *y_data_a), max(y_data_a)+.1*max(y_data_a))\n",
+    "    ax1.set_xlabel(x_label)\n",
+    "    \n",
+    "    ax2 = ax1.twinx()\n",
+    "    series_b, = ax2.plot(x_data_b, y_data_b, 'r-', marker='|', markersize=2.0, mfc='r', label=series_b_label)\n",
+    "    ax2.set_ylabel(y_label_b, color='r')\n",
+    "    ax2.set_ylim(min(0, *y_data_b), max(y_data_b)+.1*max(y_data_b))\n",
+    "    ax2.tick_params('y', colors='r')\n",
+    "    \n",
+    "    plt.grid(b=True, which='major', color='k', linestyle='-')\n",
+    "    plt.legend(handles=(series_a, series_b), bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.)\n",
+    "    plt.show()\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Run Time Series and Plot\n",
+    "\n",
+    "In the cell below we import the `nexuscli` library and target the NEXUS webservice running in the `nexus-webapp` container.  \n",
+    "\n",
+    "Then we define a bounding box and plot it on a map using the `plot_box` function defined above.  \n",
+    "\n",
+    "Next, we define the time bounds for our time series and submit the request to NEXUS. The request is timed and the results are then plotted using the `show_plot` method defined above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import time\n",
+    "import nexuscli\n",
+    "from datetime import datetime\n",
+    "\n",
+    "from shapely.geometry import box\n",
+    "\n",
+    "nexuscli.set_target(\"http://nexus-webapp:8083\")\n",
+    "\n",
+    "# Create a bounding box using the box method imported above\n",
+    "bbox = box(-150, 40, -120, 55)\n",
+    "\n",
+    "# Plot the bounding box using the helper method plot_box\n",
+    "plot_box(bbox)\n",
+    "\n",
+    "start = time.perf_counter()\n",
+    "\n",
+    "# Call the time_series method for the AVHRR_OI_L4_GHRSST_NCEI dataset using \n",
+    "# the bounding box and time period 2013-01-01 through 2014-03-01\n",
+    "datasets = [\"AVHRR_OI_L4_GHRSST_NCEI\"]\n",
+    "start_time = datetime(2015, 11, 1)\n",
+    "end_time = datetime(2015, 11, 30)\n",
+    "ts = nexuscli.time_series(datasets, bbox, start_time, end_time, spark=True)\n",
+    "\n",
+    "print(\"Time Series took {} seconds to generate\".format(time.perf_counter() - start))\n",
+    "\n",
+    "# Plot the resulting time series using the helper method show_plot\n",
+    "avhrr_ts = ts[0]\n",
+    "show_plot(avhrr_ts.time, avhrr_ts.mean, 'Time', 'Temperature (C)')"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.5"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/docker/jupyter/requirements.txt b/docker/jupyter/requirements.txt
new file mode 100644
index 0000000..e4f500a
--- /dev/null
+++ b/docker/jupyter/requirements.txt
@@ -0,0 +1,4 @@
+shapely
+requests
+numpy
+cassandra-driver==3.9.0
diff --git a/docker/kafka/Dockerfile b/docker/kafka/Dockerfile
deleted file mode 100644
index ca62ee3..0000000
--- a/docker/kafka/Dockerfile
+++ /dev/null
@@ -1,26 +0,0 @@
-FROM centos:7
-
-RUN yum -y update && \
-    yum -y install wget
-
-# Install Oracle JDK 1.8u121-b13
-RUN wget -q --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.rpm" && \
-    yum -y install jdk-8u121-linux-x64.rpm && \
-    rm jdk-8u121-linux-x64.rpm
-ENV JAVA_HOME /usr/java/default
-
-# Install Kafka
-RUN groupadd -r kafka && useradd -r -g kafka kafka
-WORKDIR /usr/local/kafka
-RUN wget -q http://apache.claz.org/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz && \
-    tar -xvzf kafka_2.11-0.9.0.1.tgz && \
-    ln -s kafka_2.11-0.9.0.1 current && \
-    rm -f kafka_2.11-0.9.0.1.tgz && \
-    chown -R kafka:kafka kafka_2.11-0.9.0.1
-
-ENV PATH $PATH:/usr/local/kafka/current/bin
-    
-USER kafka
-COPY kafka.properties /usr/local/kafka/current/config/
-
-ENTRYPOINT ["kafka-server-start.sh"]
diff --git a/docker/kafka/README.md b/docker/kafka/README.md
deleted file mode 100644
index 49142fb..0000000
--- a/docker/kafka/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
-This Docker container runs Apache Kafka 0.9.0.1 (Scala 2.11) on CentOS 7 with Oracle JDK 8u121 (jdk-8u121-linux-x64).
-
-The easiest way to run it is:
-
-    docker run -it --add-host="zkhost:10.200.10.1" -p 9092:9092 nexusjpl/kafka
-
-The default command when running this container is the `kafka-server-start.sh` script using the `/usr/local/kafka/current/config/server.properties` configuration file. 
-
-By default, the server.properties file is configured to connect to ZooKeeper as follows:
-
-    zookeeper.connect=zkhost:2181/kafka
-
-So by specifying `--add-host="zkhost:10.200.10.1"` with a valid IP address for a ZooKeeper node, Kafka will be able to connect to an existing cluster.
-
-If you need to override any of the configuration, you can use:
-
-    docker run -it --add-host="zkhost:10.200.10.1" nexusjpl/kafka kafka-server-start.sh /usr/local/kafka/current/config/server.properties --override property=value
\ No newline at end of file
diff --git a/docker/kafka/docker-compose.yml b/docker/kafka/docker-compose.yml
deleted file mode 100644
index b01c4ff..0000000
--- a/docker/kafka/docker-compose.yml
+++ /dev/null
@@ -1,53 +0,0 @@
-version: '3'
-
-networks:
-  nexus:
-      external: true
-      
-      
-services:
-    
-    kafka1:
-        image: nexusjpl/kafka
-        container_name: kafka1
-        command: ["/usr/local/kafka/current/config/kafka1.properties"]
-        extra_hosts:
-            - "zkhost1:$ZK_HOST1_IP"
-            - "zkhost2:$ZK_HOST2_IP"
-            - "zkhost3:$ZK_HOST3_IP"
-        networks:
-            - nexus
-        deploy:
-            placement:
-              constraints:
-                - node.labels.nexus.type == kafka
-        
-    kafka2:
-        image: nexusjpl/kafka
-        container_name: kafka2
-        command: ["/usr/local/kafka/current/config/kafka2.properties"]
-        extra_hosts:
-            - "zkhost1:$ZK_HOST1_IP"
-            - "zkhost2:$ZK_HOST2_IP"
-            - "zkhost3:$ZK_HOST3_IP"
-        networks:
-            - nexus
-        deploy:
-            placement:
-              constraints:
-                - node.labels.nexus.type == kafka
-        
-    kafka3:
-        image: nexusjpl/kafka
-        container_name: kafka3
-        command: ["/usr/local/kafka/current/config/kafka3.properties"]
-        extra_hosts:
-            - "zkhost1:$ZK_HOST1_IP"
-            - "zkhost2:$ZK_HOST2_IP"
-            - "zkhost3:$ZK_HOST3_IP"
-        networks:
-            - nexus
-        deploy:
-            placement:
-              constraints:
-                - node.labels.nexus.type == kafka
\ No newline at end of file
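
As with the ingest compose file, the `zkhost1`..`zkhost3` extra hosts above are
resolved from `$ZK_HOST1_IP`..`$ZK_HOST3_IP`, so a sketch of bringing up the
three brokers (placeholder IPs; the `deploy` constraints again apply only under
swarm) is:

    ZK_HOST1_IP=<ip1> ZK_HOST2_IP=<ip2> ZK_HOST3_IP=<ip3> docker-compose up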
diff --git a/docker/kafka/kafka.properties b/docker/kafka/kafka.properties
deleted file mode 100644
index 44d2794..0000000
--- a/docker/kafka/kafka.properties
+++ /dev/null
@@ -1,131 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see kafka.server.KafkaConfig for additional details and defaults
-
-############################# Server Basics #############################
-
-# Override this on command line startup
-# e.g. --override broker.id=1
-#broker.id=1
-
-# The maximum size of a message that the server can receive. It is important that this property
-# be in sync with the maximum fetch size your consumers use or else an unruly producer
-# will be able to publish messages too large for consumers to consume.
-message.max.bytes=2048576
-
-# The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
-replica.fetch.max.bytes=2048576
-
-############################# Socket Server Settings #############################
-
-listeners=PLAINTEXT://:9092
-
-# The port the socket server listens on
-#port=9092
-
-# Hostname the broker will bind to. If not set, the server will bind to all interfaces
-#host.name=localhost
-
-# Hostname the broker will advertise to producers and consumers. If not set, it uses the
-# value for "host.name" if configured.  Otherwise, it will use the value returned from
-# java.net.InetAddress.getCanonicalHostName().
-#advertised.host.name=<hostname routable by clients>
-
-# The port to publish to ZooKeeper for clients to use. If this is not set,
-# it will publish the same port that the broker binds to.
-#advertised.port=<port accessible by clients>
-
-# The number of threads handling network requests
-num.network.threads=3
-
-# The number of threads doing disk I/O
-num.io.threads=8
-
-# The send buffer (SO_SNDBUF) used by the socket server
-socket.send.buffer.bytes=102400
-
-# The receive buffer (SO_RCVBUF) used by the socket server
-socket.receive.buffer.bytes=102400
-
-# The maximum size of a request that the socket server will accept (protection against OOM)
-socket.request.max.bytes=104857600
-
-
-############################# Log Basics #############################
-
-# A comma-separated list of directories under which to store log files
-log.dirs=/tmp/kafka-logs
-
-# The default number of log partitions per topic. More partitions allow greater
-# parallelism for consumption, but this will also result in more files across
-# the brokers.
-num.partitions=1
-
-# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
-# This value is recommended to be increased for installations with data dirs located in RAID array.
-num.recovery.threads.per.data.dir=1
-
-############################# Log Flush Policy #############################
-
-# Messages are immediately written to the filesystem but by default we only fsync() to sync
-# the OS cache lazily. The following configurations control the flush of data to disk.
-# There are a few important trade-offs here:
-#    1. Durability: Unflushed data may be lost if you are not using replication.
-#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
-#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
-# The settings below allow one to configure the flush policy to flush data after a period of time or
-# every N messages (or both). This can be done globally and overridden on a per-topic basis.
-
-# The number of messages to accept before forcing a flush of data to disk
-#log.flush.interval.messages=10000
-
-# The maximum amount of time a message can sit in a log before we force a flush
-#log.flush.interval.ms=1000
-
-############################# Log Retention Policy #############################
-
-# The following configurations control the disposal of log segments. The policy can
-# be set to delete segments after a period of time, or after a given size has accumulated.
-# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
-# from the end of the log.
-
-# The minimum age of a log file to be eligible for deletion
-log.retention.hours=168
-
-# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
-# segments don't drop below log.retention.bytes.
-#log.retention.bytes=1073741824
-
-# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.segment.bytes=1073741824
-
-# The interval at which log segments are checked to see if they can be deleted according
-# to the retention policies
-log.retention.check.interval.ms=300000
-
-############################# Zookeeper #############################
-
-# Zookeeper connection string (see zookeeper docs for details).
-# This is a comma-separated list of host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
-# You can also append an optional chroot string to the URLs to specify the
-# root directory for all kafka znodes.
-zookeeper.connect=zkhost1:2181,zkhost2:2181,zkhost3:2181/kafka
-
-# Timeout in ms for connecting to zookeeper
-zookeeper.connection.timeout.ms=10000
-zookeeper.session.timeout.ms=10000
-zookeeper.sync.time.ms=4000
-
diff --git a/docker/nexus-cluster.yml b/docker/nexus-cluster.yml
deleted file mode 100644
index d3fdba1..0000000
--- a/docker/nexus-cluster.yml
+++ /dev/null
@@ -1,251 +0,0 @@
-version: '3'
-
-networks:
-    nexus:
-        external: true
-      
-volumes:
-    kafka1-logs:
-        driver: local
-    kafka2-logs:
-        driver: local
-    kafka3-logs:
-        driver: local
-
-services:
-
-    mysqldb:
-        image: mysql:8
-        command: [--character-set-server=latin1, --collation-server=latin1_swedish_ci]
-        hostname: mysqldb
-        environment:
-            - MYSQL_RANDOM_ROOT_PASSWORD=yes
-            - MYSQL_DATABASE=xdjob
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD=admin
-        networks:
-            - nexus
-        ports:
-            - "3306"
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-                    
-    redis:
-        image: redis:3
-        hostname: redis
-        networks:
-            - nexus
-        ports:
-            - "6379"
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-                    
-    xd-admin:
-        image: nexusjpl/ingest-admin
-        hostname: xd-admin
-        depends_on:
-            - "mysqldb"
-            - "redis"
-        networks:
-            - nexus
-        ports:
-          - "9393:9393"
-        environment:
-            - MYSQL_PORT_3306_TCP_ADDR=mysqldb
-            - MYSQL_PORT_3306_TCP_PORT=3306
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD=admin
-            - REDIS_ADDR=redis
-            - REDIS_PORT=6379
-            - ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181
-            - ZOOKEEPER_XD_CHROOT=springxd
-            - KAFKA_BROKERS=kafka1:9092,kafka2:9092,kafka3:9092
-            - KAFKA_ZKADDRESS=zk1:2181,zk2:2181,zk3:2181/kafka
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-admin == true
-  
-    xd-container:
-        image: nexusjpl/ingest-container
-        depends_on:
-            - "xd-admin"
-        networks:
-            - nexus
-        volumes:
-            - /efs/data/share/datasets:/usr/local/data/nexus/
-        environment:
-            - MYSQL_PORT_3306_TCP_ADDR=mysqldb
-            - MYSQL_PORT_3306_TCP_PORT=3306
-            - MYSQL_USER=xd
-            - MYSQL_PASSWORD=admin
-            - REDIS_ADDR=redis
-            - REDIS_PORT=6379
-            - ZOOKEEPER_CONNECT=zk1:2181,zk2:2181,zk3:2181
-            - ZOOKEEPER_XD_CHROOT=springxd
-            - KAFKA_BROKERS=kafka1:9092,kafka2:9092,kafka3:9092
-            - KAFKA_ZKADDRESS=zk1:2181,zk2:2181,zk3:2181/kafka
-        deploy:
-            mode: global
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 5
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest == true
-
-    zk1:
-        image: zookeeper
-        hostname: zk1
-        networks:
-            - nexus
-        volumes:
-            - /data/zk1/data:/data
-            - /data/zk1/datalog:/datalog
-        environment:
-            - ZOO_MY_ID=1
-            - "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888"
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.zoo.id == 1
-    zk2:
-        image: zookeeper
-        hostname: zk2
-        networks:
-            - nexus
-        volumes:
-            - /data/zk2/data:/data
-            - /data/zk2/datalog:/datalog
-        environment:
-            - ZOO_MY_ID=2
-            - "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888"
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.zoo.id == 2
-                    
-    zk3:
-        image: zookeeper
-        hostname: zk3
-        networks:
-            - nexus
-        volumes:
-            - /data/zk3/data:/data
-            - /data/zk3/datalog:/datalog
-        environment:
-            - ZOO_MY_ID=3
-            - "ZOO_SERVERS=server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888"
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.zoo.id == 3
-                    
-    kafka1:
-        image: nexusjpl/kafka
-        command: ["/usr/local/kafka/current/config/kafka.properties", "--override", "zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka", "--override", "broker.id=1"]
-        hostname: kafka1
-        depends_on:
-            - "zk1"
-            - "zk2"
-            - "zk3"
-        networks:
-            - nexus
-        volumes:
-            - kafka1-logs:/tmp/kafka-logs
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.kafka.id == 1
-                    
-    kafka2:
-        image: nexusjpl/kafka
-        command: ["/usr/local/kafka/current/config/kafka.properties", "--override", "zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka", "--override", "broker.id=2"]
-        hostname: kafka2
-        depends_on:
-            - "zk1"
-            - "zk2"
-            - "zk3"
-        networks:
-            - nexus
-        volumes:
-            - kafka2-logs:/tmp/kafka-logs
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.kafka.id == 2
-                    
-    kafka3:
-        image: nexusjpl/kafka
-        command: ["/usr/local/kafka/current/config/kafka.properties", "--override", "zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka", "--override", "broker.id=3"]
-        hostname: kafka3
-        depends_on:
-            - "zk1"
-            - "zk2"
-            - "zk3"
-        networks:
-            - nexus
-        volumes:
-            - kafka3-logs:/tmp/kafka-logs
-        deploy:
-            restart_policy:
-                condition: any
-                delay: 5s
-                max_attempts: 3
-                window: 120s
-            placement:
-                constraints:
-                    - node.labels.nexus.ingest-msg == true
-                    - node.labels.nexus.kafka.id == 3
-        
\ No newline at end of file
diff --git a/docker/nexus-imaging/Dockerfile b/docker/nexus-imaging/Dockerfile
index d0bc916..b6e28c1 100644
--- a/docker/nexus-imaging/Dockerfile
+++ b/docker/nexus-imaging/Dockerfile
@@ -1,9 +1,24 @@
-FROM nexusjpl/spark-mesos-base
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ARG tag_version=1.0.0-SNAPSHOT
+FROM sdap/nexus-webapp:${tag_version}
+
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
 RUN yum -y install unzip aws-cli
 
-RUN cd /tmp
-RUN git clone -b image-gen https://github.com/dataplumber/nexus.git
 COPY docker-entrypoint.sh /tmp/docker-entrypoint.sh
 
 WORKDIR /tmp
diff --git a/docker/nexus-imaging/docker-entrypoint.sh b/docker/nexus-imaging/docker-entrypoint.sh
index 565b9d8..638be1c 100755
--- a/docker/nexus-imaging/docker-entrypoint.sh
+++ b/docker/nexus-imaging/docker-entrypoint.sh
@@ -1,8 +1,19 @@
 #!/bin/bash
 
-#Activate Python environment
-source activate nexus
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-cd /tmp/nexus/analysis/webservice
+cd ${NEXUS_SRC}/analysis/webservice
 python WorkflowDriver.py --ds ${DATASET_NAME} --g ${GRANULE_NAME} --p ${PREFIX} --ct ${COLOR_TABLE} --min ${MIN} --max ${MAX} --h ${HEIGHT} --w ${WIDTH} --t ${TIME_INTERVAL} --i ${INTERP}
-
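
The entrypoint above takes all of its parameters from environment variables, so
running the imaging container amounts to supplying them. A hedged sketch (the
image tag and all values are placeholders, not defined by this PR):

    docker run \
        -e DATASET_NAME=<dataset> -e GRANULE_NAME=<granule> -e PREFIX=<prefix> \
        -e COLOR_TABLE=<table> -e MIN=<min> -e MAX=<max> \
        -e HEIGHT=<height> -e WIDTH=<width> \
        -e TIME_INTERVAL=<interval> -e INTERP=<interp> \
        sdap/nexus-imaging:1.0.0-SNAPSHOT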
diff --git a/docker/nexus-imaging/setupall.sh b/docker/nexus-imaging/setupall.sh
deleted file mode 100644
index 7846fcf..0000000
--- a/docker/nexus-imaging/setupall.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-cd /tmp/image-gen-nexus/nexus-ingest/nexus-messages
-./gradlew clean build install
-cd /build/python/nexusproto
-
-conda create --name nexus python
-source activate nexus
-
-conda install numpy
-python setup.py install
-
-cd /tmp/image-gen-nexus/data-access
-pip install cython
-python setup.py install
-
-cd /tmp/image-gen-nexus/analysis
-conda install numpy matplotlib mpld3 scipy netCDF4 basemap gdal pyproj=1.9.5.1 libnetcdf=4.3.3.1
-pip install pillow
-python setup.py install
\ No newline at end of file
diff --git a/docker/nexus-webapp/Dockerfile b/docker/nexus-webapp/Dockerfile
index c3d61d7..d144502 100644
--- a/docker/nexus-webapp/Dockerfile
+++ b/docker/nexus-webapp/Dockerfile
@@ -1,20 +1,91 @@
-# Run example: docker run -it --net=host -p 8083:8083 -e MASTER=mesos://127.0.0.1:5050 nexus-webapp
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-FROM nexusjpl/spark-mesos-base
+FROM centos:7
 
-MAINTAINER Joseph Jacob "Joseph.Jacob@jpl.nasa.gov"
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
-# Set environment variables.
+ARG SPARK_VERSION=2.2.0
+ARG APACHE_NEXUSPROTO=https://github.com/apache/incubator-sdap-nexusproto.git
+ARG APACHE_NEXUSPROTO_BRANCH=master
+ARG APACHE_NEXUS=https://github.com/apache/incubator-sdap-nexus.git
+ARG APACHE_NEXUS_BRANCH=master
 
-ENV MASTER=local[1] \
-    SPARK_LOCAL_IP=nexus-webapp
+RUN yum -y update && \
+    yum -y install \
+    bzip2 \
+    gcc \
+    git \
+    mesa-libGL.x86_64 \
+    python-devel \
+    wget \
+    which && \
+    yum clean all
 
-# Run NEXUS webapp.
+ENV SPARK_LOCAL_IP=127.0.0.1 \
+    CASSANDRA_CONTACT_POINTS=127.0.0.1 \
+    CASSANDRA_LOCAL_DATACENTER=datacenter1 \
+    SOLR_URL_PORT=127.0.0.1:8983 \
+    SPARK_DIR=spark-${SPARK_VERSION} \
+    SPARK_PACKAGE=spark-${SPARK_VERSION}-bin-hadoop2.7 \
+    SPARK_HOME=/usr/local/spark-${SPARK_VERSION} \
+    PYSPARK_DRIVER_PYTHON=/usr/local/anaconda2/bin/python \
+    PYSPARK_PYTHON=/usr/local/anaconda2/bin/python \
+    PYSPARK_SUBMIT_ARGS="--driver-memory=4g pyspark-shell" \
+    PYTHONPATH=${PYTHONPATH}:/usr/local/spark-${SPARK_VERSION}/python:/usr/local/spark-${SPARK_VERSION}/python/lib/py4j-0.10.4-src.zip:/usr/local/spark-${SPARK_VERSION}/python/lib/pyspark.zip \
+    SPARK_EXECUTOR_URI=/usr/local/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
+    NEXUS_SRC=/tmp/incubator-sdap-nexus
 
-EXPOSE 8083
+# Install Spark
+RUN cd /usr/local && \
+    wget --quiet http://d3kbcqa49mib13.cloudfront.net/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \
+    tar -xzf spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \
+    chown -R root.root spark-${SPARK_VERSION}-bin-hadoop2.7 && \
+    ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 ${SPARK_DIR} && \
+    rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \
+    cd /
+
+# Install Miniconda
+RUN wget -q https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O install_anaconda.sh && \
+    /bin/bash install_anaconda.sh -b -p /usr/local/anaconda2 && \
+    rm install_anaconda.sh && \
+    /usr/local/anaconda2/bin/conda update -n base conda
+ENV PATH /usr/local/anaconda2/bin:$PATH
+# Conda dependencies for nexus
+RUN conda install -c conda-forge -y netCDF4 && \
+    conda install -y numpy cython mpld3 scipy basemap gdal matplotlib && \
+    pip install shapely==1.5.16 cassandra-driver==3.5.0 && \
+    conda install -c conda-forge backports.functools_lru_cache=1.3 && \
+    cd /usr/lib64 && ln -s libcom_err.so.2 libcom_err.so.3 && \
+    cd /usr/local/anaconda2/lib && \
+    ln -s libnetcdf.so.11 libnetcdf.so.7 && \
+    ln -s libkea.so.1.4.6 libkea.so.1.4.5 && \
+    ln -s libhdf5_cpp.so.12 libhdf5_cpp.so.10 && \
+    ln -s libjpeg.so.9 libjpeg.so.8
+
+# Install Oracle JDK 1.8u172-b11
+RUN wget -q --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/jdk-8u172-linux-x64.rpm" && \
+    yum install -y jdk-8u172-linux-x64.rpm && \
+    rm jdk-8u172-linux-x64.rpm
 
-WORKDIR /tmp
+COPY *.sh /tmp/
 
-COPY docker-entrypoint.sh /tmp/docker-entrypoint.sh
+# Install nexusproto and nexus
+RUN /tmp/install_nexusproto.sh $APACHE_NEXUSPROTO $APACHE_NEXUSPROTO_BRANCH && \
+    /tmp/install_nexus.sh $APACHE_NEXUS $APACHE_NEXUS_BRANCH $NEXUS_SRC
+
+EXPOSE 8083
 
 ENTRYPOINT ["/tmp/docker-entrypoint.sh"]
diff --git a/docker/nexus-webapp/docker-entrypoint.sh b/docker/nexus-webapp/docker-entrypoint.sh
index d8b59a0..0589fb2 100755
--- a/docker/nexus-webapp/docker-entrypoint.sh
+++ b/docker/nexus-webapp/docker-entrypoint.sh
@@ -1,13 +1,38 @@
 #!/bin/bash
 
-sed -i "s/server.socket_host.*$/server.socket_host=$SPARK_LOCAL_IP/g" /nexus/analysis/webservice/config/web.ini && \
-sed -i "s/cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6/$CASSANDRA_CONTACT_POINTS/g" /nexus/data-access/nexustiles/config/datastores.ini && \
-sed -i "s/solr1:8983/$SOLR_URL_PORT/g" /nexus/data-access/nexustiles/config/datastores.ini
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-cd /nexus/data-access
+set -e
+
+if [ -n "$TORNADO_HOST" ]; then
+  sed -i "s/server.socket_host = .*/server.socket_host = '${TORNADO_HOST}'/g" ${NEXUS_SRC}/analysis/webservice/config/web.ini
+fi
+sed -i "s/host=127.0.0.1/host=$CASSANDRA_CONTACT_POINTS/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini && \
+sed -i "s/local_datacenter=.*/local_datacenter=$CASSANDRA_LOCAL_DATACENTER/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini && \
+sed -i "s/host=localhost:8983/host=$SOLR_URL_PORT/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini
+
+# DOMS
+sed -i "s/module_dirs=.*/module_dirs=webservice.algorithms,webservice.algorithms_spark,webservice.algorithms.doms/g" ${NEXUS_SRC}/analysis/webservice/config/web.ini && \
+sed -i "s/host=.*/host=$CASSANDRA_CONTACT_POINTS/g" ${NEXUS_SRC}/analysis/webservice/algorithms/doms/domsconfig.ini && \
+sed -i "s/local_datacenter=.*/local_datacenter=$CASSANDRA_LOCAL_DATACENTER/g" ${NEXUS_SRC}/analysis/webservice/algorithms/doms/domsconfig.ini
+
+cd ${NEXUS_SRC}/data-access
 python setup.py install --force
 
-cd /nexus/analysis
+cd ${NEXUS_SRC}/analysis
 python setup.py install --force
 
-python -m webservice.webapp
\ No newline at end of file
+python -m webservice.webapp
diff --git a/docker/nexus-webapp/install_nexus.sh b/docker/nexus-webapp/install_nexus.sh
new file mode 100755
index 0000000..08113de
--- /dev/null
+++ b/docker/nexus-webapp/install_nexus.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+set -e
+
+APACHE_NEXUS="https://github.com/apache/incubator-sdap-nexus.git"
+MASTER="master"
+NEXUS_SRC=/incubator-sdap-nexus
+
+GIT_REPO=${1:-$APACHE_NEXUS}
+GIT_BRANCH=${2:-$MASTER}
+NEXUS_SRC_LOC=${3:-$NEXUS_SRC}
+
+mkdir -p ${NEXUS_SRC_LOC}
+pushd ${NEXUS_SRC_LOC}
+git init
+git pull ${GIT_REPO} ${GIT_BRANCH}
+
+cd data-access
+python setup.py install
+cd ../analysis
+python setup.py install
+popd
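
All three positional arguments fall back to the defaults declared at the top of
the script, so both of these invocations are valid:

    # clone the Apache repo at master into /incubator-sdap-nexus
    ./install_nexus.sh
    # or pin the repo, branch, and checkout location explicitly
    ./install_nexus.sh https://github.com/apache/incubator-sdap-nexus.git master /tmp/incubator-sdap-nexus

install_nexusproto.sh below follows the same convention for its repo and branch
arguments.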
diff --git a/docker/nexus-webapp/install_nexusproto.sh b/docker/nexus-webapp/install_nexusproto.sh
new file mode 100755
index 0000000..ce44c70
--- /dev/null
+++ b/docker/nexus-webapp/install_nexusproto.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+set -e
+
+APACHE_NEXUSPROTO="https://github.com/apache/incubator-sdap-nexusproto.git"
+MASTER="master"
+
+GIT_REPO=${1:-$APACHE_NEXUSPROTO}
+GIT_BRANCH=${2:-$MASTER}
+
+mkdir nexusproto
+pushd nexusproto
+git init
+git pull ${GIT_REPO} ${GIT_BRANCH}
+
+./gradlew pythonInstall --info
+
+./gradlew install --info
+
+rm -rf /root/.gradle
+popd
+rm -rf nexusproto
diff --git a/docker/nexusbase/Dockerfile b/docker/nexusbase/Dockerfile
deleted file mode 100644
index 7a2d454..0000000
--- a/docker/nexusbase/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM centos:7
-
-WORKDIR /tmp
-
-RUN yum -y update && \
-    yum -y install wget \
-    git \
-    which \
-    bzip2
-
-# Install Oracle JDK 1.8u121-b13
-RUN wget -q --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.rpm" && \
-    yum -y install jdk-8u121-linux-x64.rpm && \
-    rm jdk-8u121-linux-x64.rpm
-ENV JAVA_HOME /usr/java/default
-
-# ########################
-# # Apache Maven   #
-# ########################
-ENV M2_HOME /usr/local/apache-maven
-ENV M2 $M2_HOME/bin 
-ENV PATH $PATH:$M2
-
-RUN mkdir $M2_HOME && \
-    wget -q http://mirror.stjschools.org/public/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz && \
-    tar -xvzf apache-maven-3.3.9-bin.tar.gz -C $M2_HOME --strip-components=1 && \
-    rm -f apache-maven-3.3.9-bin.tar.gz
-
-# ########################
-# # Anaconda   #
-# ########################
-RUN wget -q https://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh -O install_anaconda.sh && \
-    /bin/bash install_anaconda.sh -b -p /usr/local/anaconda2 && \
-    rm install_anaconda.sh
-ENV PATH $PATH:/usr/local/anaconda2/bin
-
diff --git a/docker/solr-single-node/Dockerfile b/docker/solr-single-node/Dockerfile
index c2b9302..fe1301d 100644
--- a/docker/solr-single-node/Dockerfile
+++ b/docker/solr-single-node/Dockerfile
@@ -1,11 +1,26 @@
-FROM nexusjpl/nexus-solr
-MAINTAINER Nga Quach "Nga.T.Chung@jpl.nasa.gov"
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ARG tag_version=latest
+FROM sdap/solr:${tag_version}
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
 USER root
 
 RUN apt-get update && apt-get -y install git && rm -rf /var/lib/apt/lists/*
 
-RUN cd / && git clone https://github.com/dataplumber/nexus.git && cp -r /nexus/data-access/config/schemas/solr/nexustiles . && rm -rf /nexus
+RUN cd / && git clone https://github.com/apache/incubator-sdap-nexus.git && cp -r /incubator-sdap-nexus/data-access/config/schemas/solr/nexustiles . && rm -rf /incubator-sdap-nexus
 
 USER $SOLR_USER
 
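
Because the base image tag is parameterized, a specific `sdap/solr` build can be
pinned at build time; for example (the target tag here is illustrative):

    docker build --build-arg tag_version=latest -t sdap/solr-single-node docker/solr-single-node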
diff --git a/docker/solr/Dockerfile b/docker/solr/Dockerfile
index 2f72a9f..29ab29b 100644
--- a/docker/solr/Dockerfile
+++ b/docker/solr/Dockerfile
@@ -1,5 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 FROM solr:6.4.2
-MAINTAINER Nga Quach "Nga.T.Chung@jpl.nasa.gov"
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
 USER root
 
@@ -7,7 +21,7 @@ RUN cd / && wget https://downloads.sourceforge.net/project/jts-topo-suite/jts/1.
 
 RUN apt-get update && apt-get -y install git && rm -rf /var/lib/apt/lists/*
 
-RUN cd / && git clone https://github.com/dataplumber/nexus.git && cp -r /nexus/data-access/config/schemas/solr/nexustiles /tmp/nexustiles && rm -rf /nexus
+RUN cd / && git clone https://github.com/apache/incubator-sdap-nexus.git && cp -r /incubator-sdap-nexus/data-access/config/schemas/solr/nexustiles /tmp/nexustiles && rm -rf /incubator-sdap-nexus
 
 RUN mkdir /solr-home
 
@@ -15,6 +29,9 @@ RUN chown -R $SOLR_USER:$SOLR_USER /solr-home
 
 VOLUME /solr-home
 
+VOLUME /opt/solr/server/solr/
+RUN chown -R $SOLR_USER:$SOLR_USER /opt/solr/server/solr/
+
 RUN cp /jts-1.14/lib/jts-1.14.jar /opt/solr/server/lib/jts-1.14.jar
 
 RUN cp /jts-1.14/lib/jtsio-1.14.jar /opt/solr/server/lib/jtsio-1.14.jar
diff --git a/docker/spark-mesos-agent/Dockerfile b/docker/spark-mesos-agent/Dockerfile
index 5c58fb0..471d63f 100644
--- a/docker/spark-mesos-agent/Dockerfile
+++ b/docker/spark-mesos-agent/Dockerfile
@@ -1,8 +1,22 @@
-# Run example: docker run --net=nexus --name mesos-agent1 nexusjpl/spark-mesos-agent
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-FROM nexusjpl/spark-mesos-base
+ARG tag_version=1.0.0-SNAPSHOT
+FROM sdap/spark-mesos-base:${tag_version}
 
-MAINTAINER Joseph Jacob "Joseph.Jacob@jpl.nasa.gov"
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
 # Run a Mesos slave.
 
diff --git a/docker/spark-mesos-agent/docker-entrypoint.sh b/docker/spark-mesos-agent/docker-entrypoint.sh
index 1ed2c34..36d608b 100755
--- a/docker/spark-mesos-agent/docker-entrypoint.sh
+++ b/docker/spark-mesos-agent/docker-entrypoint.sh
@@ -1,13 +1,39 @@
 #!/bin/bash
 
-sed -i "s/server.socket_host.*$/server.socket_host=$SPARK_LOCAL_IP/g" /nexus/analysis/webservice/config/web.ini && \
-sed -i "s/cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6/$CASSANDRA_CONTACT_POINTS/g" /nexus/data-access/nexustiles/config/datastores.ini && \
-sed -i "s/solr1:8983/$SOLR_URL_PORT/g" /nexus/data-access/nexustiles/config/datastores.ini
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-cd /nexus/data-access
+set -e
+
+if [ -n "$TORNADO_HOST" ]; then
+  sed -i "s/server.socket_host = .*/server.socket_host = '${TORNADO_HOST}'/g" ${NEXUS_SRC}/analysis/webservice/config/web.ini
+fi
+sed -i "s/host=127.0.0.1/host=$CASSANDRA_CONTACT_POINTS/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini && \
+sed -i "s/local_datacenter=.*/local_datacenter=$CASSANDRA_LOCAL_DATACENTER/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini && \
+sed -i "s/host=localhost:8983/host=$SOLR_URL_PORT/g" ${NEXUS_SRC}/data-access/nexustiles/config/datastores.ini
+
+# DOMS
+sed -i "s/module_dirs=.*/module_dirs=webservice.algorithms,webservice.algorithms_spark,webservice.algorithms.doms/g" ${NEXUS_SRC}/analysis/webservice/config/web.ini && \
+sed -i "s/host=.*/host=$CASSANDRA_CONTACT_POINTS/g" ${NEXUS_SRC}/analysis/webservice/algorithms/doms/domsconfig.ini && \
+sed -i "s/local_datacenter=.*/local_datacenter=$CASSANDRA_LOCAL_DATACENTER/g" ${NEXUS_SRC}/analysis/webservice/algorithms/doms/domsconfig.ini
+
+cd ${NEXUS_SRC}/data-access
 python setup.py install --force
 
-cd /nexus/analysis
+cd ${NEXUS_SRC}/analysis
 python setup.py install --force
 
-${MESOS_HOME}/build/bin/mesos-agent.sh --master=${MESOS_MASTER_NAME}:${MESOS_MASTER_PORT} --port=${MESOS_AGENT_PORT} --work_dir=${MESOS_WORKDIR} --no-systemd_enable_support --launcher=posix --no-switch_user --executor_environment_variables='{ "PYTHON_EGG_CACHE": "/tmp" }'
\ No newline at end of file
+
+${MESOS_HOME}/build/bin/mesos-agent.sh --master=${MESOS_MASTER_NAME}:${MESOS_MASTER_PORT} --port=${MESOS_AGENT_PORT} --work_dir=${MESOS_WORKDIR} --no-systemd_enable_support --launcher=posix --no-switch_user --executor_environment_variables='{ "PYTHON_EGG_CACHE": "/tmp" }'
diff --git a/docker/spark-mesos-base/Dockerfile b/docker/spark-mesos-base/Dockerfile
index 04dd4ff..784f56c 100644
--- a/docker/spark-mesos-base/Dockerfile
+++ b/docker/spark-mesos-base/Dockerfile
@@ -1,19 +1,65 @@
-FROM nexusjpl/nexusbase
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+FROM centos:7
+
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
-MAINTAINER Joseph Jacob "Joseph.Jacob@jpl.nasa.gov"
-
-# Install packages needed for builds
+WORKDIR /tmp
 
-RUN yum install -y gcc python-devel
+RUN yum -y update && \
+    yum -y install wget \
+    git \
+    which \
+    bzip2 \
+    gcc \
+    python-devel
+
+# Install Oracle JDK 1.8u121-b13
+RUN wget -q --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.rpm" && \
+    yum -y install jdk-8u121-linux-x64.rpm && \
+    rm jdk-8u121-linux-x64.rpm
+ENV JAVA_HOME /usr/java/default
+
+# ########################
+# # Apache Maven   #
+# ########################
+ENV M2_HOME /usr/local/apache-maven
+ENV M2 $M2_HOME/bin
+ENV PATH $PATH:$M2
+
+RUN mkdir $M2_HOME && \
+    wget -q http://mirror.stjschools.org/public/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz && \
+    tar -xvzf apache-maven-3.3.9-bin.tar.gz -C $M2_HOME --strip-components=1 && \
+    rm -f apache-maven-3.3.9-bin.tar.gz
+
+# ########################
+# # Anaconda   #
+# ########################
+RUN wget -q https://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh -O install_anaconda.sh && \
+    /bin/bash install_anaconda.sh -b -p /usr/local/anaconda2 && \
+    rm install_anaconda.sh
+ENV PATH $PATH:/usr/local/anaconda2/bin
 
 # Set environment variables.  For Mesos, I used MESOS_VER because MESOS_VERSION
-# is expected to be a logical TRUE/FALSE flag that tells Mesos whether or not 
+# is expected to be a logical TRUE/FALSE flag that tells Mesos whether or not
 # to simply print the version number and exit.
 
 ENV INSTALL_LOC=/usr/local \
     HADOOP_VERSION=2.7.3 \
     SPARK_VERSION=2.1.0 \
-    MESOS_VER=1.2.0 \
+    MESOS_VER=1.5.0 \
     MESOS_MASTER_PORT=5050 \
     MESOS_AGENT_PORT=5051 \
     MESOS_WORKDIR=/var/lib/mesos \
@@ -32,12 +78,11 @@ ENV SPARK_HOME=${INSTALL_LOC}/${SPARK_DIR} \
     PYSPARK_DRIVER_PYTHON=${CONDA_HOME}/bin/python \
     PYSPARK_PYTHON=${CONDA_HOME}/bin/python \
     PYSPARK_SUBMIT_ARGS="--driver-memory=4g pyspark-shell"
-    
+
 ENV PYTHONPATH=${PYTHONPATH}:${SPARK_HOME}/python:${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${SPARK_HOME}/python/lib/pyspark.zip \
     MESOS_NATIVE_JAVA_LIBRARY=${INSTALL_LOC}/lib/libmesos.so \
-    
     SPARK_EXECUTOR_URI=${INSTALL_LOC}/${SPARK_PACKAGE}.tgz
-    
+
 WORKDIR ${INSTALL_LOC}
 
 # Set up Spark
@@ -55,7 +100,7 @@ RUN source ./install_mesos.sh && \
     mkdir ${MESOS_WORKDIR}
 
 # Set up Anaconda environment
-    
+
 ENV PATH=${CONDA_HOME}/bin:${PATH}:${HADOOP_HOME}/bin:${SPARK_HOME}/bin
 
 RUN conda install -c conda-forge -y netCDF4 && \
@@ -77,30 +122,28 @@ RUN cd ${CONDA_HOME}/lib && \
 
 RUN yum install -y mesa-libGL.x86_64
 
+# Install nexusproto
+ARG APACHE_NEXUSPROTO=https://github.com/apache/incubator-sdap-nexusproto.git
+ARG APACHE_NEXUSPROTO_BRANCH=master
+COPY install_nexusproto.sh ./install_nexusproto.sh
+RUN ./install_nexusproto.sh $APACHE_NEXUSPROTO $APACHE_NEXUSPROTO_BRANCH
+
 # Retrieve NEXUS code and build it.
 
 WORKDIR /
 
-RUN git clone https://github.com/dataplumber/nexus.git
+RUN git clone https://github.com/apache/incubator-sdap-nexus.git
 
-RUN sed -i 's/,webservice.algorithms.doms//g' /nexus/analysis/webservice/config/web.ini && \
-    sed -i 's/127.0.0.1/nexus-webapp/g' /nexus/analysis/webservice/config/web.ini && \
-    sed -i 's/127.0.0.1/cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6/g' /nexus/data-access/nexustiles/config/datastores.ini && \
-    sed -i 's/localhost:8983/solr1:8983/g' /nexus/data-access/nexustiles/config/datastores.ini
-
-WORKDIR /nexus/nexus-ingest/nexus-messages
-
-RUN ./gradlew clean build install
-
-WORKDIR /nexus/nexus-ingest/nexus-messages/build/python/nexusproto
-
-RUN python setup.py install
+RUN sed -i 's/,webservice.algorithms.doms//g' /incubator-sdap-nexus/analysis/webservice/config/web.ini && \
+    sed -i 's/127.0.0.1/nexus-webapp/g' /incubator-sdap-nexus/analysis/webservice/config/web.ini && \
+    sed -i 's/127.0.0.1/cassandra1,cassandra2,cassandra3,cassandra4,cassandra5,cassandra6/g' /incubator-sdap-nexus/data-access/nexustiles/config/datastores.ini && \
+    sed -i 's/localhost:8983/solr1:8983/g' /incubator-sdap-nexus/data-access/nexustiles/config/datastores.ini
 
-WORKDIR /nexus/data-access
+WORKDIR /incubator-sdap-nexus/data-access
 
 RUN python setup.py install
 
-WORKDIR /nexus/analysis
+WORKDIR /incubator-sdap-nexus/analysis
 
 RUN python setup.py install
 
diff --git a/docker/spark-mesos-base/install_mesos.sh b/docker/spark-mesos-base/install_mesos.sh
index 65a647e..655d737 100644
--- a/docker/spark-mesos-base/install_mesos.sh
+++ b/docker/spark-mesos-base/install_mesos.sh
@@ -1,3 +1,19 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 # Install a few utility tools
 yum install -y tar wget git
 
diff --git a/docker/spark-mesos-base/install_nexusproto.sh b/docker/spark-mesos-base/install_nexusproto.sh
new file mode 100755
index 0000000..ce44c70
--- /dev/null
+++ b/docker/spark-mesos-base/install_nexusproto.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+set -e
+
+APACHE_NEXUSPROTO="https://github.com/apache/incubator-sdap-nexusproto.git"
+MASTER="master"
+
+GIT_REPO=${1:-$APACHE_NEXUSPROTO}
+GIT_BRANCH=${2:-$MASTER}
+
+mkdir nexusproto
+pushd nexusproto
+git init
+git pull ${GIT_REPO} ${GIT_BRANCH}
+
+./gradlew pythonInstall --info
+
+./gradlew install --info
+
+rm -rf /root/.gradle
+popd
+rm -rf nexusproto
diff --git a/docker/spark-mesos-master/Dockerfile b/docker/spark-mesos-master/Dockerfile
index 49a298f..c1d7d39 100644
--- a/docker/spark-mesos-master/Dockerfile
+++ b/docker/spark-mesos-master/Dockerfile
@@ -1,8 +1,22 @@
-# Run example: docker run --net=nexus --name mesos-master -p 5050:5050 nexusjpl/spark-mesos-master
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-FROM nexusjpl/spark-mesos-base
+ARG tag_version=1.0.0-SNAPSHOT
+FROM sdap/spark-mesos-base:${tag_version}
 
-MAINTAINER Joseph Jacob "Joseph.Jacob@jpl.nasa.gov"
+MAINTAINER Apache SDAP "dev@sdap.apache.org"
 
 EXPOSE ${MESOS_MASTER_PORT}
 
diff --git a/docker/zookeeper/Dockerfile b/docker/zookeeper/Dockerfile
deleted file mode 100644
index de3f7b1..0000000
--- a/docker/zookeeper/Dockerfile
+++ /dev/null
@@ -1,27 +0,0 @@
-FROM java:openjdk-8-jre-alpine
-MAINTAINER Namrata Malarout <na...@jpl.nasa.gov>
-
-LABEL name="zookeeper" version="3.4.8"
-
-RUN apk add --no-cache wget bash \
-    && mkdir /opt \
-    && wget -q -O - http://apache.mirrors.pair.com/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz | tar -xzf - -C /opt \
-    && mv /opt/zookeeper-3.4.8 /opt/zookeeper \
-    && cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg \
-    && mkdir -p /tmp/zookeeper
-
-EXPOSE 2181 2182 2183 2888 3888 3889 3890
-
-WORKDIR /opt/zookeeper
-
-VOLUME ["/opt/zookeeper/conf", "/tmp/zookeeper"]
-RUN mkdir /tmp/zookeeper/1
-RUN mkdir /tmp/zookeeper/2
-RUN mkdir /tmp/zookeeper/3
-RUN printf '%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n' 'tickTime=2000' 'dataDir=/tmp/zookeeper/1' 'clientPort=2182' 'initLimit=5' 'syncLimit=2' 'server.1=localhost:2888:3888' 'server.2=localhost:2889:3889' 'server.3=localhost:2890:3890' >> /opt/zookeeper/zoo.cfg
-RUN printf '%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n' 'tickTime=2000' 'dataDir=/tmp/zookeeper/2' 'clientPort=2182' 'initLimit=5' 'syncLimit=2' 'server.1=localhost:2888:3888' 'server.2=localhost:2889:3889' 'server.3=localhost:2890:3890' > /opt/zookeeper/zoo2.cfg
-RUN printf '%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n' 'tickTime=2000' 'dataDir=/tmp/zookeeper/3' 'clientPort=2183' 'initLimit=5' 'syncLimit=2' 'server.1=localhost:2888:3888' 'server.2=localhost:2889:3889' 'server.3=localhost:2890:3890' > /opt/zookeeper/zoo3.cfg
-RUN cd /opt/zookeeper
-RUN cp zoo2.cfg conf/zoo2.cfg
-RUN cp zoo3.cfg conf/zoo3.cfg
-CMD bin/zkServer.sh start zoo.cfg
diff --git a/docker/zookeeper/README.md b/docker/zookeeper/README.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/quickstart.rst b/docs/quickstart.rst
new file mode 100644
index 0000000..e5fb84f
--- /dev/null
+++ b/docs/quickstart.rst
@@ -0,0 +1,302 @@
+.. _quickstart:
+
+*****************
+Quickstart Guide
+*****************
+
+Introduction
+=============
+
+NEXUS is a collection of software that enables the analysis of scientific data. In order to achieve fast analysis, NEXUS takes the approach of breaking apart, or "tiling", the original data into smaller tiles for storage. Metadata about each tile is stored in a fast searchable index with a pointer to the original data array. When an analysis is requested, the necessary tiles are looked up in the index and then the data for only those tiles is loaded for processing.
+
+This quickstart guide will walk you through how to install and run NEXUS on your laptop. By the end of this quickstart, you should be able to run a time series analysis for one month of sea surface temperature data and plot the result.
+
+.. _quickstart-prerequisites:
+
+Prerequisites
+==============
+
+* Docker (tested on v18.03.1-ce)
+* Internet Connection
+* bash
+* cURL
+* 500 MB of disk space
+
+Prepare
+========
+
+Start downloading the Docker images and data files.
+
+.. _quickstart-step1:
+
+Pull Docker Images
+-------------------
+
+Pull the necessary Docker images from the `SDAP repository <https://hub.docker.com/u/sdap>`_ on Docker Hub. Please check the repository for the latest version tag.
+
+.. code-block:: bash
+
+  export VERSION=1.0.0-SNAPSHOT
+
+.. code-block:: bash
+
+  docker pull sdap/ningester:${VERSION}
+  docker pull sdap/solr-singlenode:${VERSION}
+  docker pull sdap/cassandra:${VERSION}
+  docker pull sdap/nexus-webapp:${VERSION}
+
+.. _quickstart-step2:
+
+Create a new Docker Bridge Network
+------------------------------------
+
+This quickstart consists of launching several Docker containers that need to communicate with one another. To facilitate this communication, we want to be able to reference containers by hostname instead of IP address. The default bridge network used by Docker only supports this through the ``--link`` option, which is now considered `deprecated <https://docs.docker.com/network/links/>`_.
+
+The currently recommended way to achieve this is to use a `user defined bridge network <https://docs.docker.com/network/bridge/##differences-between-user-defined-bridges-and-the-default-bridge>`_ and launch all of the containers into that network.
+
+The network we will be using for this quickstart will be called ``sdap-net``. Create it using the following command:
+
+.. code-block:: bash
+
+  docker network create sdap-net
+
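+You can confirm the network was created (and, later, see which containers have joined it):
+
+.. code-block:: bash
+
+  docker network inspect sdap-net
+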
+.. _quickstart-step3:
+
+Download Sample Data
+---------------------
+
+The data we will be downloading is part of the `AVHRR OI dataset <https://podaac.jpl.nasa.gov/dataset/AVHRR_OI-NCEI-L4-GLOB-v2.0>`_ which measures sea surface temperature. We will download 1 month of data and ingest it into a local Solr and Cassandra instance.
+
+Choose a download location that Docker can mount (typically it needs to be under your home directory).
+
+.. code-block:: bash
+
+  export DATA_DIRECTORY=~/nexus-quickstart/data/avhrr-granules
+  mkdir -p ${DATA_DIRECTORY}
+
+Then download one month's worth of AVHRR netCDF files.
+
+.. code-block:: bash
+
+  cd $DATA_DIRECTORY
+
+  export URL_LIST="https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/305/20151101120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/306/20151102120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/307/20151103120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/308/20151104120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/309/20151105120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/310/20151106120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/311/20151107120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/312/20151108120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/313/20151109120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/314/20151110120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/315/20151111120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/316/20151112120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/317/20151113120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/318/20151114120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/319/20151115120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/320/20151116120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/321/20151117120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/322/20151118120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/323/20151119120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/324/20151120120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/325/20151121120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/326/20151122120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/327/20151123120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/328/20151124120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/329/20151125120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/330/20151126120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/331/20151127120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/332/20151128120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/333/20151129120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/ghrsst/data/GDS2/L4/GLOB/NCEI/AVHRR_OI/v2/2015/334/20151130120000-NCEI-L4_GHRSST-SSTblend-AVHRR_OI-GLOB-v02.0-fv02.0.nc"
+
+  for url in ${URL_LIST}; do
+    curl -O "${url}"
+  done
+
+You should now have 30 files downloaded to your data directory, one for each day in November 2015.
+
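+A quick sanity check is to count the netCDF files in the data directory:
+
+.. code-block:: bash
+
+  ls ${DATA_DIRECTORY}/*.nc | wc -l    # should print 30
+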
+Start Data Storage Containers
+==============================
+
+We will use Solr and Cassandra to store the tile metadata and data respectively.
+
+.. _quickstart-step4:
+
+Start Solr
+-----------
+
+SDAP is tested with Solr version 7.x with the JTS topology suite add-on installed. The SDAP Docker image is based on the official Solr image and simply adds the JTS topology suite and the nexustiles core.
+
+.. note:: Mounting a volume is optional, but if you choose to do it, you can start and stop the Solr container without having to reingest your data every time. If you do not mount a volume, the data will be lost whenever the Solr container is removed.
+
+To start Solr using a volume mount and expose the admin webapp on port 8983:
+
+.. code-block:: bash
+
+  export SOLR_DATA=~/nexus-quickstart/solr
+  docker run --name solr --network sdap-net -v ${SOLR_DATA}:/opt/solr/server/solr/nexustiles/data -p 8983:8983 -d sdap/solr-singlenode:${VERSION}
+
+If you don't want to use a volume, leave off the ``-v`` option.
+
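+Once the container is running, you can confirm that Solr is up and that the ``nexustiles`` core exists by querying the CoreAdmin API:
+
+.. code-block:: bash
+
+  curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=nexustiles"
+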
+
+.. _quickstart-step5:
+
+Start Cassandra
+----------------
+
+SDAP is tested with Cassandra version 2.2.x. The SDAP Docker image is based on the official Cassandra image and simply mounts the schema DDL script into the container for easy initialization.
+
+.. note:: Similar to the Solr container, using a volume is recommended but not required.
+
+To start Cassandra using a volume mount and expose the connection port 9042:
+
+.. code-block:: bash
+
+  export CASSANDRA_DATA=~/nexus-quickstart/cassandra
+  docker run --name cassandra --network sdap-net -p 9042:9042 -v ${CASSANDRA_DATA}:/var/lib/cassandra -d sdap/cassandra:${VERSION}
+
+If this is your first time starting the cassandra container, you need to initialize the database by running the DDL script included in the image. Execute the following command to create the needed keyspace and table:
+
+.. code-block:: bash
+
+  docker exec -it cassandra cqlsh -f /tmp/nexustiles.cql
+
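+You can verify that the schema was created by listing the keyspaces; ``nexustiles`` should appear in the output:
+
+.. code-block:: bash
+
+  docker exec -it cassandra cqlsh -e "DESCRIBE KEYSPACES"
+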
+.. _quickstart-step6:
+
+Ingest Data
+============
+
+Now that Solr and Cassandra have both been started and configured, we can ingest some data. NEXUS ingests data using the ningester docker image. This image is designed to read configuration and data from volume mounts and then tile the data and save it to the datastores. More information can be found in the :ref:`ningester` section.
+
+Ningester needs 3 things to run:
+
+#. Tiling configuration. How should the dataset be tiled? What is the dataset called? Are there any transformations that need to happen (e.g. kelvin to celsius conversion)? etc...
+#. Connection configuration. What should be used for metadata storage and where can it be found? What should be used for data storage and where can it be found?
+#. Data files. The data that will be ingested.
+
+Tiling configuration
+---------------------
+
+For this quickstart we will use the AVHRR tiling configuration from the test job in the Apache project. It can be found here: `AvhrrJobTest.yml <https://github.com/apache/incubator-sdap-ningester/blob/bc596c2749a7a2b44a01558b60428f6d008f4f45/src/testJobs/resources/testjobs/AvhrrJobTest.yml>`_. Download that file into a temporary location on your laptop that can be mounted by Docker.
+
+.. code-block:: bash
+
+  export NINGESTER_CONFIG=~/nexus-quickstart/ningester/config
+  mkdir -p ${NINGESTER_CONFIG}
+  cd ${NINGESTER_CONFIG}
+  curl -O https://raw.githubusercontent.com/apache/incubator-sdap-ningester/bc596c2749a7a2b44a01558b60428f6d008f4f45/src/testJobs/resources/testjobs/AvhrrJobTest.yml
+
+Connection configuration
+-------------------------
+
+We want ningester to use Solr for its metadata store and Cassandra for its data store. We also want it to connect to the Solr and Cassandra instances we started earlier. In order to do this we need a connection configuration file that specifies how the application should connect to Solr and Cassandra. It looks like this:
+
+.. code-block:: yaml
+
+  # Tile writer configuration
+  ningester:
+    tile_writer:
+      data_store: cassandraStore
+      metadata_store: solrStore
+  ---
+  # Connection settings for the docker profile
+  spring:
+      profiles:
+        - docker
+      data:
+        cassandra:
+          keyspaceName: nexustiles
+          contactPoints: cassandra
+        solr:
+          host: http://solr:8983/solr/
+
+  datasource:
+    solrStore:
+      collection: nexustiles
+
+Save this configuration to a file on your local laptop that can be mounted into a Docker container:
+
+.. code-block:: bash
+
+  cat << EOF > ${NINGESTER_CONFIG}/connectionsettings.yml
+  # Tile writer configuration
+  ningester:
+    tile_writer:
+      data_store: cassandraStore
+      metadata_store: solrStore
+  ---
+  # Connection settings for the docker profile
+  spring:
+      profiles:
+        - docker
+      data:
+        cassandra:
+          keyspaceName: nexustiles
+          contactPoints: cassandra
+        solr:
+          host: http://solr:8983/solr/
+
+  datasource:
+    solrStore:
+      collection: nexustiles
+  EOF
+
+Data files
+-----------
+
+We already downloaded the data files to ``${DATA_DIRECTORY}`` in :ref:`quickstart-step3`, so we are ready to start ingesting.
+
+Launch Ningester
+-------------------
+
+The ningester Docker image runs a batch job that ingests a single granule, so we use a short ``for`` loop to cycle through the data files and run ingestion on each one.
+
+.. note:: Ingestion takes about 60 seconds per file. Depending on how powerful your laptop is and what other programs you have running, you can choose to ingest more than one file at a time. This example ingests one file at a time, so 30 files will take roughly 30 minutes. You can speed this up by reducing the time spent sleeping, for example by changing ``sleep 60`` to ``sleep 30``.
+
+.. code-block:: bash
+
+  for g in `ls ${DATA_DIRECTORY} | awk '{print $1}'`
+  do
+    docker run -d --name $(echo avhrr_$g | cut -d'-' -f 1) --network sdap-net -v ${NINGESTER_CONFIG}:/config/ -v ${DATA_DIRECTORY}/${g}:/data/${g} sdap/ningester:${VERSION} docker,solr,cassandra
+    sleep 60
+  done
+
+Each container will be launched with a name of ``avhrr_<date>`` where ``<date>`` is the date from the filename of the granule being ingested. You can use ``docker ps`` to watch the containers launch and you can use ``docker logs <container name>`` to view the logs for any one container as the data is ingested.
+
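+For example, to list the running ingestion containers and follow the logs for the November 1st granule:
+
+.. code-block:: bash
+
+  docker ps --filter name=avhrr
+  docker logs -f avhrr_20151101120000
+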
+You can move on to the next section while the data ingests.
+
+.. note:: After the container finishes ingesting the file, the container will exit (with a ``0`` exit code) indicating completion. However, the containers will **not** automatically be removed for you. This is simply to allow you to inspect the containers even after they have exited if you want to. A useful command to clean up all of the stopped containers that we started is ``docker rm $(docker ps -a | grep avhrr | awk '{print $1}')``.
+
+
+.. _quickstart-step7:
+
+Start the Webapp
+=================
+
+Now that the data is being (or has been) ingested, we need to start the webapp that provides the HTTP interface to the analysis capabilities. This is currently a Python webapp running Tornado and is contained in the nexus-webapp Docker image. To start the webapp and expose port 8083, use the following command:
+
+.. code-block:: bash
+
+  docker run -d --name nexus-webapp --network sdap-net -p 8083:8083 -e SPARK_LOCAL_IP=127.0.0.1 -e MASTER=local[4] -e CASSANDRA_CONTACT_POINTS=cassandra -e SOLR_URL_PORT=solr:8983 sdap/nexus-webapp:${VERSION}
+
+This command starts the NEXUS webservice and connects it to the Solr and Cassandra containers. It also configures Spark to run in local mode with 4 worker threads (``local[4]``).
+
+After running this command you should be able to access the NEXUS webservice by sending requests to http://localhost:8083. A good test is to query the ``/list`` endpoint which lists all of the datasets currently available to that instance of NEXUS. For example:
+
+.. code-block:: bash
+
+  curl -X GET http://localhost:8083/list
+
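+Once ingestion has completed, the ``/list`` response should include an entry for the AVHRR dataset. As a rough sketch of a direct analysis request (the ``timeSeriesSpark`` endpoint and its parameters are shown here only as an illustration; ``<shortName>`` stands for the dataset name reported by ``/list``):
+
+.. code-block:: bash
+
+  curl "http://localhost:8083/timeSeriesSpark?ds=<shortName>&b=-150,45,-120,60&startTime=2015-11-01T00:00:00Z&endTime=2015-11-30T23:59:59Z"
+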
+
+.. _quickstart-step8:
+
+Launch Jupyter
+================
+
+At this point NEXUS is running and you can interact with the different API endpoints. However, there is a Python client library called ``nexuscli`` which facilitates interacting with the webservice through the Python programming language. The easiest way to use this library is to start the `Jupyter notebook <http://jupyter.org/>`_ Docker image from the SDAP repository. This image is based on the ``jupyter/scipy-notebook`` Docker image but comes pre-installed with the ``nexuscli`` module and an example notebook.
+
+To launch the Jupyter notebook use the following command:
+
+.. code-block:: bash
+
+  docker run -it --rm --name jupyter --network sdap-net -p 8888:8888 sdap/jupyter:${VERSION} start-notebook.sh --NotebookApp.password='sha1:a0d7f85e5fc4:0c173bb35c7dc0445b13865a38d25263db592938'
+
+This command launches a Jupyter container and exposes it on port 8888.
+
+.. note:: The password for the Jupyter instance is ``quickstart``
+
+Once the container starts, navigate to http://localhost:8888/. You will be prompted for a password; use ``quickstart``. After entering the password, you will be presented with a directory structure that looks something like this:
+
+.. image:: images/Jupyter_Home.png
+
+Click on the ``Quickstart`` directory to open it. You should see a notebook called ``Time Series Example``:
+
+.. image:: images/Jupyter_Quickstart.png
+
+Click on the ``Time Series Example`` notebook to start it. This will open the notebook and allow you to run the two cells and execute a Time Series command against your local instance of NEXUS.
+
+.. _quickstart-step9:
+
+Finished!
+================
+
+Congratulations, you have completed the quickstart! In this example you:
+
+#. Learned how to ingest data into NEXUS datastores
+#. Learned how to start the NEXUS webservice
+#. Learned how to start a Jupyter Notebook
+#. Ran a time series analysis on 1 month of AVHRR OI data and plotted the result


 
