Posted to commits@bigtop.apache.org by kw...@apache.org on 2017/03/22 04:17:10 UTC

bigtop git commit: BIGTOP-2703: refresh juju bits with metric/CI support (fixes #187)

Repository: bigtop
Updated Branches:
  refs/heads/master 3a987865c -> 5c0dc2a29


BIGTOP-2703: refresh juju bits with metric/CI support (fixes #187)

Signed-off-by: Kevin W Monroe <ke...@canonical.com>


Project: http://git-wip-us.apache.org/repos/asf/bigtop/repo
Commit: http://git-wip-us.apache.org/repos/asf/bigtop/commit/5c0dc2a2
Tree: http://git-wip-us.apache.org/repos/asf/bigtop/tree/5c0dc2a2
Diff: http://git-wip-us.apache.org/repos/asf/bigtop/diff/5c0dc2a2

Branch: refs/heads/master
Commit: 5c0dc2a29ac174e3f6e6c2ee1a10909b06f08fd0
Parents: 3a98786
Author: Kevin W Monroe <ke...@canonical.com>
Authored: Mon Mar 20 21:20:22 2017 +0000
Committer: Kevin W Monroe <ke...@canonical.com>
Committed: Tue Mar 21 23:16:58 2017 -0500

----------------------------------------------------------------------
 bigtop-deploy/juju/hadoop-kafka/.gitignore      |   2 +
 bigtop-deploy/juju/hadoop-kafka/README.md       | 329 +++++++++++++++++++
 bigtop-deploy/juju/hadoop-kafka/bundle-dev.yaml | 151 +++++++++
 .../juju/hadoop-kafka/bundle-local.yaml         | 151 +++++++++
 bigtop-deploy/juju/hadoop-kafka/bundle.yaml     | 151 +++++++++
 bigtop-deploy/juju/hadoop-kafka/ci-info.yaml    |  34 ++
 bigtop-deploy/juju/hadoop-kafka/copyright       |  16 +
 .../juju/hadoop-kafka/tests/01-bundle.py        | 124 +++++++
 .../juju/hadoop-kafka/tests/tests.yaml          |   7 +
 bigtop-deploy/juju/hadoop-processing/README.md  |   7 +-
 .../juju/hadoop-processing/bundle-dev.yaml      |  22 +-
 .../juju/hadoop-processing/bundle-local.yaml    |  22 +-
 .../juju/hadoop-processing/bundle.yaml          |  28 +-
 .../juju/hadoop-processing/ci-info.yaml         |  26 ++
 .../juju/hadoop-processing/tests/01-bundle.py   |  11 -
 bigtop-deploy/juju/hadoop-spark/README.md       |  14 +-
 bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml |  31 +-
 .../juju/hadoop-spark/bundle-local.yaml         |  31 +-
 bigtop-deploy/juju/hadoop-spark/bundle.yaml     |  39 +--
 bigtop-deploy/juju/hadoop-spark/ci-info.yaml    |  34 ++
 .../juju/hadoop-spark/tests/01-bundle.py        |  11 -
 .../juju/hadoop-spark/tests/tests.yaml          |   2 +-
 bigtop-deploy/juju/spark-processing/README.md   |   2 +-
 .../juju/spark-processing/bundle-dev.yaml       |  12 +-
 .../juju/spark-processing/bundle-local.yaml     |  12 +-
 bigtop-deploy/juju/spark-processing/bundle.yaml |  12 +-
 .../juju/spark-processing/ci-info.yaml          |  14 +
 .../juju/spark-processing/tests/01-bundle.py    |  11 -
 .../hadoop/layer-hadoop-namenode/README.md      |   2 +-
 .../hadoop/layer-hadoop-namenode/metrics.yaml   |  13 +
 .../charm/hadoop/layer-hadoop-plugin/README.md  |   2 +-
 .../layer-hadoop-resourcemanager/README.md      |   2 +-
 .../layer-hadoop-resourcemanager/metrics.yaml   |   5 +
 .../charm/hadoop/layer-hadoop-slave/README.md   |   2 +-
 .../src/charm/kafka/layer-kafka/README.md       |   2 +-
 .../src/charm/mahout/layer-mahout/README.md     |   2 +-
 .../src/charm/pig/layer-pig/README.md           |   2 +-
 .../src/charm/spark/layer-spark/README.md       |   2 +-
 .../src/charm/zeppelin/layer-zeppelin/README.md |   2 +-
 .../charm/zookeeper/layer-zookeeper/README.md   |   2 +-
 .../zookeeper/layer-zookeeper/metrics.yaml      |   5 +
 41 files changed, 1167 insertions(+), 182 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/.gitignore
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/.gitignore b/bigtop-deploy/juju/hadoop-kafka/.gitignore
new file mode 100644
index 0000000..a295864
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/.gitignore
@@ -0,0 +1,2 @@
+*.pyc
+__pycache__

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/README.md b/bigtop-deploy/juju/hadoop-kafka/README.md
new file mode 100644
index 0000000..4c7a581
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/README.md
@@ -0,0 +1,329 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# Overview
+
+The Apache Hadoop software library is a framework that allows for the
+distributed processing of large data sets across clusters of computers
+using a simple programming model.
+
+Hadoop is designed to scale from a few servers to thousands of machines,
+each offering local computation and storage. Rather than rely on hardware
+to deliver high availability, Hadoop can detect and handle failures at the
+application layer. This provides a highly available service on top of a cluster
+of machines, each of which may be prone to failure.
+
+Apache Kafka is an open-source message broker project, written in Scala and
+developed by the Apache Software Foundation. It aims to provide a unified,
+high-throughput, low-latency platform for handling real-time data feeds. Learn
+more at [kafka.apache.org][].
+
+This bundle provides a complete deployment of Hadoop and Kafka components from
+[Apache Bigtop][] that perform distributed data processing at scale. Ganglia
+and rsyslog applications are also provided to monitor cluster health and syslog
+activity.
+
+[kafka.apache.org]: http://kafka.apache.org/
+[Apache Bigtop]: http://bigtop.apache.org/
+
+## Bundle Composition
+
+The applications that comprise this bundle are spread across 9 machines as
+follows:
+
+  * NameNode (HDFS)
+  * ResourceManager (YARN)
+    * Colocated on the NameNode unit
+  * Slave (DataNode and NodeManager)
+    * 3 separate units
+  * Kafka
+  * Flume-Kafka
+    * Colocated on the Kafka unit
+  * Zookeeper
+    * 3 separate units
+  * Client (Hadoop endpoint)
+  * Plugin (Facilitates communication with the Hadoop cluster)
+    * Colocated on the Client unit
+  * Flume-HDFS
+    * Colocated on the Client unit
+  * Ganglia (Web interface for monitoring cluster metrics)
+    * Colocated on the Client unit
+  * Rsyslog (Aggregate cluster syslog events in a single location)
+    * Colocated on the Client unit
+
+The Flume-HDFS unit provides an Apache Flume agent featuring an Avro source,
+memory channel, and HDFS sink. This agent supports a relation with the
+Flume-Kafka charm (apache-flume-kafka) to ingest messages published to a given
+Kafka topic into HDFS.
+
+Deploying this bundle results in a fully configured Apache Bigtop
+cluster on any supported cloud, which can be scaled to meet workload
+demands.
+
+
+# Deploying
+
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the [getting-started][] instructions prior to deploying this
+bundle.
+
+> **Note**: This bundle requires hardware resources that may exceed the limits
+of free-tier or trial accounts on some clouds. To deploy to these
+environments, modify a local copy of [bundle.yaml][] to set
+`services: 'X': num_units: 1` and `machines: 'X': constraints: mem=3G` as
+needed to satisfy account limits.
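+
+For example, a trimmed copy might include the following (a sketch using this
+bundle's `slave` service and machine "1"; repeat for any other service or
+machine that exceeds your account limits):
+
+    services:
+      slave:
+        num_units: 1
+    machines:
+      "1":
+        constraints: "mem=3G"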
+
+Deploy this bundle from the Juju charm store with the `juju deploy` command:
+
+    juju deploy hadoop-kafka
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+hadoop-kafka`.
+
+Alternatively, deploy a locally modified `bundle.yaml` with:
+
+    juju deploy /path/to/bundle.yaml
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+/path/to/bundle.yaml`.
+
+The charms in this bundle can also be built from their source layers in the
+[Bigtop charm repository][].  See the [Bigtop charm README][] for instructions
+on building and deploying these charms locally.
+
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
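+
+For example, a proxied model might be configured as follows (a sketch; the
+proxy and mirror URLs are placeholders for your own endpoints):
+
+    juju model-config http-proxy=http://proxy.example.com:3128 \
+      https-proxy=http://proxy.example.com:3128 \
+      apt-mirror=http://mirror.example.com/ubuntu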
+
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
+[juju-quickstart]: https://launchpad.net/juju-quickstart
+[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
+[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
+
+
+# Configuring
+
+The default Kafka topic where messages are published is unset. Set this to
+an existing Kafka topic as follows:
+
+    juju config flume-kafka kafka_topic='<topic_name>'
+
+If no existing topic is available, create and verify a new topic with:
+
+    juju run-action kafka/0 create-topic topic=<topic_name> \
+     partitions=1 replication=1
+    juju show-action-output <id>  # <-- id from above command
+
+Once the Flume agents start, messages will begin flowing into
+HDFS in year-month-day directories here: `/user/flume/flume-kafka/%y-%m-%d`.
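+
+To verify that data is arriving, list the sink directory as the `hdfs` user
+(a sketch; the dated subdirectories will depend on when your agents started):
+
+    juju run --unit namenode/0 "su hdfs -c 'hdfs dfs -ls /user/flume/flume-kafka'"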
+
+
+# Verifying
+
+## Status
+The applications that make up this bundle provide status messages to indicate
+when they are ready:
+
+    juju status
+
+This is particularly useful when combined with `watch` to track the ongoing
+progress of the deployment:
+
+    watch -n 2 juju status
+
+The message for each unit will provide information about that unit's state.
+Once they all indicate that they are ready, perform application smoke tests
+to verify that the bundle is working as expected.
+
+## Smoke Test
+The charms for each core component (namenode, resourcemanager, slave, kafka,
+and zookeeper) provide a `smoke-test` action that can be used to verify the
+application is functioning as expected. Note that the 'slave' component runs
+extensive tests provided by Apache Bigtop and may take up to 30 minutes to
+complete. Run the smoke-test actions as follows:
+
+    juju run-action namenode/0 smoke-test
+    juju run-action resourcemanager/0 smoke-test
+    juju run-action slave/0 smoke-test
+    juju run-action kafka/0 smoke-test
+    juju run-action zookeeper/0 smoke-test
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do <application>/0 smoke-test`.
+
+Watch the progress of the smoke test actions with:
+
+    watch -n 2 juju show-action-status
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
+
+Eventually, all of the actions should settle to `status: completed`.  If
+any report `status: failed`, that application is not working as expected. Get
+more information about a specific smoke test with:
+
+    juju show-action-output <action-id>
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
+
+## Utilities
+Applications in this bundle include command line and web utilities that
+can be used to verify information about the cluster.
+
+From the command line, show the HDFS dfsadmin report and view the current list
+of YARN NodeManager units with the following:
+
+    juju run --application namenode "su hdfs -c 'hdfs dfsadmin -report'"
+    juju run --application resourcemanager "su yarn -c 'yarn node -list'"
+
+Show the list of Zookeeper nodes with the following:
+
+    juju run --unit zookeeper/0 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'
+
+To access the HDFS web console, find the `PUBLIC-ADDRESS` of the namenode
+application and expose it:
+
+    juju status namenode
+    juju expose namenode
+
+The web interface will be available at the following URL:
+
+    http://NAMENODE_PUBLIC_IP:50070
+
+Similarly, to access the Resource Manager web consoles, find the
+`PUBLIC-ADDRESS` of the resourcemanager application and expose it:
+
+    juju status resourcemanager
+    juju expose resourcemanager
+
+The YARN and Job History web interfaces will be available at the following URLs:
+
+    http://RESOURCEMANAGER_PUBLIC_IP:8088
+    http://RESOURCEMANAGER_PUBLIC_IP:19888
+
+
+# Monitoring
+
+This bundle includes Ganglia for system-level monitoring of the namenode,
+resourcemanager, slave, kafka, and zookeeper units. Metrics are sent to a
+centralized ganglia unit for easy viewing in a browser. To view the ganglia web
+interface, find the `PUBLIC-ADDRESS` of the Ganglia application and expose it:
+
+    juju status ganglia
+    juju expose ganglia
+
+The web interface will be available at:
+
+    http://GANGLIA_PUBLIC_IP/ganglia
+
+
+# Logging
+
+This bundle includes rsyslog to collect syslog data from the namenode,
+resourcemanager, slave, kafka, and zookeeper units. These logs are sent to a
+centralized rsyslog unit for easy syslog analysis. One way to view this log
+data is to cat the syslog file on the rsyslog unit:
+
+    juju run --unit rsyslog/0 'sudo cat /var/log/syslog'
+
+Logs may also be forwarded to an external rsyslog processing service. See
+the *Forwarding logs to a system outside of the Juju environment* section of
+the [rsyslog README](https://jujucharms.com/rsyslog/) for more information.
+
+
+# Benchmarking
+
+The `resourcemanager` charm in this bundle provides several benchmarks to gauge
+the performance of the Hadoop cluster. Each benchmark is an action that can be
+run with `juju run-action`:
+
+    $ juju actions resourcemanager
+    ACTION      DESCRIPTION
+    mrbench     Mapreduce benchmark for small jobs
+    nnbench     Load test the NameNode hardware and configuration
+    smoke-test  Run an Apache Bigtop smoke test.
+    teragen     Generate data with teragen
+    terasort    Runs teragen to generate sample data, and then runs terasort to sort that data
+    testdfsio   DFS IO Testing
+
+    $ juju run-action resourcemanager/0 nnbench
+    Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622
+
+    $ juju show-action-output 55887b40-116c-4020-8b35-1e28a54cc622
+    results:
+      meta:
+        composite:
+          direction: asc
+          units: secs
+          value: "128"
+        start: 2016-02-04T14:55:39Z
+        stop: 2016-02-04T14:57:47Z
+      results:
+        raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups":
+          "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records":
+          "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE:
+          Number of bytes written": "32999982", "HDFS: Number of write operations": "330",
+          "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056",
+          "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE:
+          Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190",
+          "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce
+          shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output
+          materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number
+          of read operations": "567", "Map output records": "95", "Reduce output records":
+          "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time
+          elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE:
+          Number of write operations": "0", "Bytes Read": "1490"}'
+    status: completed
+    timing:
+      completed: 2016-02-04 14:57:48 +0000 UTC
+      enqueued: 2016-02-04 14:55:14 +0000 UTC
+      started: 2016-02-04 14:55:27 +0000 UTC
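+
+Other benchmarks are queued the same way. For example, run the `terasort`
+benchmark from the action list above with:
+
+    juju run-action resourcemanager/0 terasort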
+
+
+# Scaling
+
+By default, this bundle deploys three Hadoop slave units, one Kafka unit, and
+three Zookeeper units. Scaling these applications is as simple as adding
+more units. To add one unit:
+
+    juju add-unit kafka
+    juju add-unit slave
+    juju add-unit zookeeper
+
+Multiple units may be added at once.  For example, add four more slave units:
+
+    juju add-unit -n4 slave
+
+
+# Contact Information
+
+- <bi...@lists.ubuntu.com>
+
+
+# Resources
+
+- [Apache Bigtop home page](http://bigtop.apache.org/)
+- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
+- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/bundle-dev.yaml b/bigtop-deploy/juju/hadoop-kafka/bundle-dev.yaml
new file mode 100644
index 0000000..36053a8
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/bundle-dev.yaml
@@ -0,0 +1,151 @@
+services:
+  namenode:
+    charm: "cs:~bigdata-dev/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    to:
+      - "0"
+  resourcemanager:
+    charm: "cs:~bigdata-dev/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    to:
+      - "0"
+  slave:
+    charm: "cs:~bigdata-dev/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    to:
+      - "1"
+      - "2"
+      - "3"
+  plugin:
+    charm: "cs:~bigdata-dev/xenial/hadoop-plugin"
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+    to:
+      - "4"
+# TODO: Charm bigtop flume
+  flume-hdfs:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-hdfs-37"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "400"
+    to:
+      - "4"
+  zookeeper:
+    charm: "cs:~bigdata-dev/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    to:
+      - "5"
+      - "6"
+      - "7"
+  kafka:
+    charm: "cs:~bigdata-dev/xenial/kafka"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "800"
+    to:
+      - "8"
+# NOTE: flume-kafka cannot be colocated with flume-hdfs as they both use /etc/flume/conf
+  flume-kafka:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-kafka-11"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "800"
+    to:
+      - "8"
+  ganglia:
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
+    num_units: 1
+    annotations:
+      gui-x: "0"
+      gui-y: "800"
+    to:
+      - "4"
+  ganglia-node:
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  rsyslog:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
+    num_units: 1
+    annotations:
+      gui-x: "1000"
+      gui-y: "800"
+    to:
+      - "4"
+  rsyslog-forwarder-ha:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: xenial
+relations:
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - [flume-hdfs, plugin]
+  - [flume-kafka, flume-hdfs]
+  - [flume-kafka, kafka]
+  - [kafka, zookeeper]
+  - ["ganglia-node:juju-info", "namenode:juju-info"]
+  - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
+  - ["ganglia-node:juju-info", "slave:juju-info"]
+  - ["ganglia-node:juju-info", "kafka:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
+  - ["ganglia:node", "ganglia-node:node"]
+  - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "kafka:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
+  - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
+machines:
+  "0":
+    series: "xenial"
+  "1":
+    series: "xenial"
+  "2":
+    series: "xenial"
+  "3":
+    series: "xenial"
+  "4":
+    series: "xenial"
+  "5":
+    series: "xenial"
+  "6":
+    series: "xenial"
+  "7":
+    series: "xenial"
+  "8":
+    series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/bundle-local.yaml b/bigtop-deploy/juju/hadoop-kafka/bundle-local.yaml
new file mode 100644
index 0000000..500503c
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/bundle-local.yaml
@@ -0,0 +1,151 @@
+services:
+  namenode:
+    charm: "/home/ubuntu/charms/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    to:
+      - "0"
+  resourcemanager:
+    charm: "/home/ubuntu/charms/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    to:
+      - "0"
+  slave:
+    charm: "/home/ubuntu/charms/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    to:
+      - "1"
+      - "2"
+      - "3"
+  plugin:
+    charm: "/home/ubuntu/charms/xenial/hadoop-plugin"
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+    to:
+      - "4"
+# TODO: Charm bigtop flume
+  flume-hdfs:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-hdfs-37"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "400"
+    to:
+      - "4"
+  zookeeper:
+    charm: "/home/ubuntu/charms/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    to:
+      - "5"
+      - "6"
+      - "7"
+  kafka:
+    charm: "/home/ubuntu/charms/xenial/kafka"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "800"
+    to:
+      - "8"
+# NOTE: flume-kafka cannot be colocated with flume-hdfs as they both use /etc/flume/conf
+  flume-kafka:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-kafka-11"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "800"
+    to:
+      - "8"
+  ganglia:
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
+    num_units: 1
+    annotations:
+      gui-x: "0"
+      gui-y: "800"
+    to:
+      - "4"
+  ganglia-node:
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  rsyslog:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
+    num_units: 1
+    annotations:
+      gui-x: "1000"
+      gui-y: "800"
+    to:
+      - "4"
+  rsyslog-forwarder-ha:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: xenial
+relations:
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - [flume-hdfs, plugin]
+  - [flume-kafka, flume-hdfs]
+  - [flume-kafka, kafka]
+  - [kafka, zookeeper]
+  - ["ganglia-node:juju-info", "namenode:juju-info"]
+  - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
+  - ["ganglia-node:juju-info", "slave:juju-info"]
+  - ["ganglia-node:juju-info", "kafka:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
+  - ["ganglia:node", "ganglia-node:node"]
+  - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "kafka:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
+  - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
+machines:
+  "0":
+    series: "xenial"
+  "1":
+    series: "xenial"
+  "2":
+    series: "xenial"
+  "3":
+    series: "xenial"
+  "4":
+    series: "xenial"
+  "5":
+    series: "xenial"
+  "6":
+    series: "xenial"
+  "7":
+    series: "xenial"
+  "8":
+    series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/bundle.yaml b/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
new file mode 100644
index 0000000..d4cb91f
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
@@ -0,0 +1,151 @@
+services:
+  namenode:
+    charm: "cs:xenial/hadoop-namenode-11"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    to:
+      - "0"
+  resourcemanager:
+    charm: "cs:xenial/hadoop-resourcemanager-11"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    to:
+      - "0"
+  slave:
+    charm: "cs:xenial/hadoop-slave-11"
+    constraints: "mem=7G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    to:
+      - "1"
+      - "2"
+      - "3"
+  plugin:
+    charm: "cs:xenial/hadoop-plugin-11"
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+    to:
+      - "4"
+# TODO: Charm bigtop flume
+  flume-hdfs:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-hdfs-37"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "400"
+    to:
+      - "4"
+  zookeeper:
+    charm: "cs:xenial/zookeeper-12"
+    constraints: "mem=3G root-disk=32G"
+    num_units: 3
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    to:
+      - "5"
+      - "6"
+      - "7"
+  kafka:
+    charm: "cs:xenial/kafka-7"
+    constraints: "mem=3G"
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "800"
+    to:
+      - "8"
+# NOTE: flume-kafka cannot be colocated with flume-hdfs as they both use /etc/flume/conf
+  flume-kafka:
+    charm: "cs:~bigdata-dev/xenial/apache-flume-kafka-11"
+    num_units: 1
+    annotations:
+      gui-x: "1500"
+      gui-y: "800"
+    to:
+      - "8"
+  ganglia:
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
+    num_units: 1
+    annotations:
+      gui-x: "0"
+      gui-y: "800"
+    to:
+      - "4"
+  ganglia-node:
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  rsyslog:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
+    num_units: 1
+    annotations:
+      gui-x: "1000"
+      gui-y: "800"
+    to:
+      - "4"
+  rsyslog-forwarder-ha:
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: xenial
+relations:
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - [flume-hdfs, plugin]
+  - [flume-kafka, flume-hdfs]
+  - [flume-kafka, kafka]
+  - [kafka, zookeeper]
+  - ["ganglia-node:juju-info", "namenode:juju-info"]
+  - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
+  - ["ganglia-node:juju-info", "slave:juju-info"]
+  - ["ganglia-node:juju-info", "kafka:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
+  - ["ganglia:node", "ganglia-node:node"]
+  - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "kafka:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
+  - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
+machines:
+  "0":
+    series: "xenial"
+  "1":
+    series: "xenial"
+  "2":
+    series: "xenial"
+  "3":
+    series: "xenial"
+  "4":
+    series: "xenial"
+  "5":
+    series: "xenial"
+  "6":
+    series: "xenial"
+  "7":
+    series: "xenial"
+  "8":
+    series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/ci-info.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/ci-info.yaml b/bigtop-deploy/juju/hadoop-kafka/ci-info.yaml
new file mode 100644
index 0000000..56f11bb
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/ci-info.yaml
@@ -0,0 +1,34 @@
+bundle:
+  name: hadoop-kafka
+  namespace: bigdata-charmers
+  release: true
+  to-channel: beta
+charm-upgrade:
+  hadoop-namenode:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-resourcemanager:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-slave:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-client:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-plugin:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  kafka:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  zookeeper:
+    from-channel: edge
+    release: true
+    to-channel: beta

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/copyright
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/copyright b/bigtop-deploy/juju/hadoop-kafka/copyright
new file mode 100644
index 0000000..e900b97
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/copyright
@@ -0,0 +1,16 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2015, Canonical Ltd., All Rights Reserved.
+License: Apache License 2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+     http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/tests/01-bundle.py b/bigtop-deploy/juju/hadoop-kafka/tests/01-bundle.py
new file mode 100755
index 0000000..ee35369
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/tests/01-bundle.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import amulet
+import os
+import re
+import unittest
+import yaml
+
+
+class TestBundle(unittest.TestCase):
+    bundle_file = os.path.join(os.path.dirname(__file__), '..', 'bundle.yaml')
+
+    @classmethod
+    def setUpClass(cls):
+        # classmethod inheritance doesn't work quite right with
+        # setUpClass / tearDownClass, so subclasses have to manually call this
+        cls.d = amulet.Deployment(series='xenial')
+        with open(cls.bundle_file) as f:
+            bun = f.read()
+        bundle = yaml.safe_load(bun)
+
+        cls.d.load(bundle)
+        cls.d.setup(timeout=3600)
+        # we need units reporting ready before we attempt our smoke tests
+        cls.d.sentry.wait_for_messages({'client': re.compile('ready'),
+                                        'namenode': re.compile('ready'),
+                                        'resourcemanager': re.compile('ready'),
+                                        'slave': re.compile('ready'),
+                                        }, timeout=3600)
+        cls.hdfs = cls.d.sentry['namenode'][0]
+        cls.yarn = cls.d.sentry['resourcemanager'][0]
+        cls.slave = cls.d.sentry['slave'][0]
+        cls.kafka = cls.d.sentry['kafka'][0]
+
+    def test_components(self):
+        """
+        Confirm that all of the required components are up and running.
+        """
+        hdfs, retcode = self.hdfs.run("pgrep -a java")
+        yarn, retcode = self.yarn.run("pgrep -a java")
+        slave, retcode = self.slave.run("pgrep -a java")
+        kafka, retcode = self.kafka.run("pgrep -a java")
+
+        assert 'NameNode' in hdfs, "NameNode not started"
+        assert 'NameNode' not in slave, "NameNode should not be running on slave"
+
+        assert 'ResourceManager' in yarn, "ResourceManager not started"
+        assert 'ResourceManager' not in slave, "ResourceManager should not be running on slave"
+
+        assert 'JobHistoryServer' in yarn, "JobHistoryServer not started"
+        assert 'JobHistoryServer' not in slave, "JobHistoryServer should not be running on slave"
+
+        assert 'NodeManager' in slave, "NodeManager not started"
+        assert 'NodeManager' not in yarn, "NodeManager should not be running on resourcemanager"
+        assert 'NodeManager' not in hdfs, "NodeManager should not be running on namenode"
+
+        assert 'DataNode' in slave, "DataNode not started"
+        assert 'DataNode' not in yarn, "DataNode should not be running on resourcemanager"
+        assert 'DataNode' not in hdfs, "DataNode should not be running on namenode"
+
+        assert 'Kafka' in kafka, 'Kafka should be running on kafka'
+
+    def test_hdfs(self):
+        """
+        Validates mkdir, ls, chmod, and rm HDFS operations.
+        """
+        uuid = self.hdfs.run_action('smoke-test')
+        result = self.d.action_fetch(uuid, timeout=600, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('HDFS smoke-test did not complete: %s' % result)
+
+    def test_yarn(self):
+        """
+        Validates YARN using the Bigtop 'yarn' smoke test.
+        """
+        uuid = self.yarn.run_action('smoke-test')
+        # 'yarn' smoke takes a while (bigtop tests download lots of stuff)
+        result = self.d.action_fetch(uuid, timeout=1800, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('YARN smoke-test did not complete: %s' % result)
+
+    def test_kafka(self):
+        """
+        Validates create/list/delete of a Kafka topic.
+        """
+        uuid = self.kafka.run_action('smoke-test')
+        result = self.d.action_fetch(uuid, timeout=600, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('Kafka smoke-test did not complete: %s' % result)
+
+    @unittest.skip(
+        'Skipping slave smoke tests; they are too inconsistent and long running for CWR.')
+    def test_slave(self):
+        """
+        Validates slave using the Bigtop 'hdfs' and 'mapred' smoke test.
+        """
+        uuid = self.slave.run_action('smoke-test')
+        # 'hdfs+mapred' smoke takes a long while (bigtop tests are slow)
+        result = self.d.action_fetch(uuid, timeout=3600, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('Slave smoke-test did not complete: %s' % result)
+
+
+if __name__ == '__main__':
+    unittest.main()

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-kafka/tests/tests.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-kafka/tests/tests.yaml b/bigtop-deploy/juju/hadoop-kafka/tests/tests.yaml
new file mode 100644
index 0000000..84f78d7
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-kafka/tests/tests.yaml
@@ -0,0 +1,7 @@
+reset: false
+deployment_timeout: 3600
+sources:
+  - 'ppa:juju/stable'
+packages:
+  - amulet
+  - python3-yaml

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/README.md b/bigtop-deploy/juju/hadoop-processing/README.md
index a3321ff..896a793 100644
--- a/bigtop-deploy/juju/hadoop-processing/README.md
+++ b/bigtop-deploy/juju/hadoop-processing/README.md
@@ -35,7 +35,7 @@ and syslog activity.
 
 ## Bundle Composition
 
-The applications that comprise this bundle are spread across 6 machines as
+The applications that comprise this bundle are spread across 5 machines as
 follows:
 
   * NameNode (HDFS)
@@ -47,8 +47,9 @@ follows:
   * Plugin (Facilitates communication with the Hadoop cluster)
     * Colocated on the Client unit
   * Ganglia (Web interface for monitoring cluster metrics)
+    * Colocated on the Client unit
   * Rsyslog (Aggregate cluster syslog events in a single location)
-    * Colocated on the Ganglia unit
+    * Colocated on the Client unit
 
 Deploying this bundle results in a fully configured Apache Bigtop
 cluster on any supported cloud, which can be scaled to meet workload
@@ -276,7 +277,7 @@ Multiple units may be added at once.  For example, add four more slave units:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml b/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
index 1380076..00fbdff 100644
--- a/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
+++ b/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
     charm: "cs:~bigdata-dev/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -9,6 +10,7 @@ services:
       - "0"
   resourcemanager:
     charm: "cs:~bigdata-dev/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -17,6 +19,7 @@ services:
       - "0"
   slave:
     charm: "cs:~bigdata-dev/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -31,7 +34,8 @@ services:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -45,20 +49,20 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -72,32 +76,22 @@ relations:
   - [plugin, namenode]
   - [plugin, resourcemanager]
   - [client, plugin]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "xenial"
-  "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml b/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
index 0492ef7..39e7a2a 100644
--- a/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
+++ b/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
     charm: "/home/ubuntu/charms/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -9,6 +10,7 @@ services:
       - "0"
   resourcemanager:
     charm: "/home/ubuntu/charms/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -17,6 +19,7 @@ services:
       - "0"
   slave:
     charm: "/home/ubuntu/charms/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -31,7 +34,8 @@ services:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -45,20 +49,20 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -72,32 +76,22 @@ relations:
   - [plugin, namenode]
   - [plugin, resourcemanager]
   - [client, plugin]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "xenial"
-  "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle.yaml b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
index 6c162f2..c4c6ad6 100644
--- a/bigtop-deploy/juju/hadoop-processing/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
-    charm: "cs:xenial/hadoop-namenode-8"
+    charm: "cs:xenial/hadoop-namenode-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -8,7 +9,8 @@ services:
     to:
       - "0"
   resourcemanager:
-    charm: "cs:xenial/hadoop-resourcemanager-8"
+    charm: "cs:xenial/hadoop-resourcemanager-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -16,7 +18,8 @@ services:
     to:
       - "0"
   slave:
-    charm: "cs:xenial/hadoop-slave-8"
+    charm: "cs:xenial/hadoop-slave-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -26,12 +29,13 @@ services:
       - "2"
       - "3"
   plugin:
-    charm: "cs:xenial/hadoop-plugin-8"
+    charm: "cs:xenial/hadoop-plugin-11"
     annotations:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -45,7 +49,7 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
@@ -58,7 +62,7 @@ services:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "5"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -72,32 +76,22 @@ relations:
   - [plugin, namenode]
   - [plugin, resourcemanager]
   - [client, plugin]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "xenial"
-  "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/ci-info.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/ci-info.yaml b/bigtop-deploy/juju/hadoop-processing/ci-info.yaml
new file mode 100644
index 0000000..72e2082
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/ci-info.yaml
@@ -0,0 +1,26 @@
+bundle:
+  name: hadoop-processing
+  namespace: bigdata-charmers
+  release: true
+  to-channel: beta
+charm-upgrade:
+  hadoop-namenode:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-resourcemanager:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-slave:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-client:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-plugin:
+    from-channel: edge
+    release: true
+    to-channel: beta

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py b/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
index 4fee723..b10ed22 100755
--- a/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
+++ b/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
@@ -34,17 +34,6 @@ class TestBundle(unittest.TestCase):
             bun = f.read()
         bundle = yaml.safe_load(bun)
 
-        # NB: strip machine ('to') placement out. amulet loses our machine spec
-        # somewhere between yaml and json; without that spec, charms specifying
-        # machine placement will not deploy. This is ok for now because all
-        # charms in this bundle are using 'reset: false' so we'll already
-        # have our deployment just the way we want it by the time this test
-        # runs. However, it's bad. Remove once this is fixed:
-        #  https://github.com/juju/amulet/issues/148
-        for service, service_config in bundle['services'].items():
-            if 'to' in service_config:
-                del service_config['to']
-
         cls.d.load(bundle)
         cls.d.setup(timeout=3600)
 

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/README.md b/bigtop-deploy/juju/hadoop-spark/README.md
index b2b936b..cc956e9 100644
--- a/bigtop-deploy/juju/hadoop-spark/README.md
+++ b/bigtop-deploy/juju/hadoop-spark/README.md
@@ -45,16 +45,16 @@ follows:
     * Colocated on the NameNode unit
   * Slave (DataNode and NodeManager)
     * 3 separate units
-  * Spark
-  * Plugin (Facilitates communication with the Hadoop cluster)
-    * Colocated on the Spark unit
-  * Client (Hadoop endpoint)
-    * Colocated on the Spark unit
+  * Spark (Master in yarn-client mode)
   * Zookeeper
     * 3 separate units
+  * Client (Hadoop endpoint)
+  * Plugin (Facilitates communication with the Hadoop cluster)
+    * Colocated on the Spark and Client units
   * Ganglia (Web interface for monitoring cluster metrics)
+    * Colocated on the Client unit
   * Rsyslog (Aggregate cluster syslog events in a single location)
-    * Colocated on the Ganglia unit
+    * Colocated on the Client unit
 
 Deploying this bundle results in a fully configured Apache Bigtop
 cluster on any supported cloud, which can be scaled to meet workload
@@ -348,7 +348,7 @@ Multiple units may be added at once.  For example, add four more slave units:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml b/bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml
index 35623fd..0bd529c 100644
--- a/bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/bundle-dev.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
     charm: "cs:~bigdata-dev/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -9,6 +10,7 @@ services:
       - "0"
   resourcemanager:
     charm: "cs:~bigdata-dev/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -17,6 +19,7 @@ services:
       - "0"
   slave:
     charm: "cs:~bigdata-dev/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -31,7 +34,8 @@ services:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -40,6 +44,7 @@ services:
       - "4"
   spark:
     charm: "cs:~bigdata-dev/xenial/spark"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     options:
       spark_execution_mode: "yarn-client"
@@ -47,17 +52,18 @@ services:
       gui-x: "1000"
       gui-y: "0"
     to:
-      - "4"
+      - "5"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "cs:~bigdata-dev/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
-      - "5"
       - "6"
       - "7"
+      - "8"
   ganglia:
     charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
@@ -65,20 +71,20 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -94,14 +100,12 @@ relations:
   - [client, plugin]
   - [spark, plugin]
   - [spark, zookeeper]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia-node:juju-info", "spark:juju-info"]
   - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
@@ -110,29 +114,20 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "6":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "7":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "8":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/bundle-local.yaml b/bigtop-deploy/juju/hadoop-spark/bundle-local.yaml
index 160683a..0c172ef 100644
--- a/bigtop-deploy/juju/hadoop-spark/bundle-local.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/bundle-local.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
     charm: "/home/ubuntu/charms/xenial/hadoop-namenode"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -9,6 +10,7 @@ services:
       - "0"
   resourcemanager:
     charm: "/home/ubuntu/charms/xenial/hadoop-resourcemanager"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -17,6 +19,7 @@ services:
       - "0"
   slave:
     charm: "/home/ubuntu/charms/xenial/hadoop-slave"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -31,7 +34,8 @@ services:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -40,6 +44,7 @@ services:
       - "4"
   spark:
     charm: "/home/ubuntu/charms/xenial/spark"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     options:
       spark_execution_mode: "yarn-client"
@@ -47,17 +52,18 @@ services:
       gui-x: "1000"
       gui-y: "0"
     to:
-      - "4"
+      - "5"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "/home/ubuntu/charms/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
-      - "5"
       - "6"
       - "7"
+      - "8"
   ganglia:
     charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
@@ -65,20 +71,20 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -94,14 +100,12 @@ relations:
   - [client, plugin]
   - [spark, plugin]
   - [spark, zookeeper]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia-node:juju-info", "spark:juju-info"]
   - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
@@ -110,29 +114,20 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "6":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "7":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "8":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/bundle.yaml b/bigtop-deploy/juju/hadoop-spark/bundle.yaml
index 2bb4e85..6346c01 100644
--- a/bigtop-deploy/juju/hadoop-spark/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/bundle.yaml
@@ -1,6 +1,7 @@
 services:
   namenode:
-    charm: "cs:xenial/hadoop-namenode-8"
+    charm: "cs:xenial/hadoop-namenode-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -8,7 +9,8 @@ services:
     to:
       - "0"
   resourcemanager:
-    charm: "cs:xenial/hadoop-resourcemanager-8"
+    charm: "cs:xenial/hadoop-resourcemanager-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     annotations:
       gui-x: "500"
@@ -16,7 +18,8 @@ services:
     to:
       - "0"
   slave:
-    charm: "cs:xenial/hadoop-slave-8"
+    charm: "cs:xenial/hadoop-slave-11"
+    constraints: "mem=7G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "0"
@@ -26,12 +29,13 @@ services:
       - "2"
       - "3"
   plugin:
-    charm: "cs:xenial/hadoop-plugin-8"
+    charm: "cs:xenial/hadoop-plugin-11"
     annotations:
       gui-x: "1000"
       gui-y: "400"
   client:
-    charm: "cs:xenial/hadoop-client-2"
+    charm: "cs:xenial/hadoop-client-3"
+    constraints: "mem=3G"
     num_units: 1
     annotations:
       gui-x: "1250"
@@ -39,7 +43,8 @@ services:
     to:
       - "4"
   spark:
-    charm: "cs:xenial/spark-17"
+    charm: "cs:xenial/spark-19"
+    constraints: "mem=7G root-disk=32G"
     num_units: 1
     options:
       spark_execution_mode: "yarn-client"
@@ -47,17 +52,18 @@ services:
       gui-x: "1000"
       gui-y: "0"
     to:
-      - "4"
+      - "5"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "cs:xenial/zookeeper-12"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
-      - "5"
       - "6"
       - "7"
+      - "8"
   ganglia:
     charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
@@ -65,7 +71,7 @@ services:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   ganglia-node:
     charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
@@ -78,7 +84,7 @@ services:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "8"
+      - "4"
   rsyslog-forwarder-ha:
     charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
@@ -94,14 +100,12 @@ relations:
   - [client, plugin]
   - [spark, plugin]
   - [spark, zookeeper]
-  - ["ganglia-node:juju-info", "client:juju-info"]
   - ["ganglia-node:juju-info", "namenode:juju-info"]
   - ["ganglia-node:juju-info", "resourcemanager:juju-info"]
   - ["ganglia-node:juju-info", "slave:juju-info"]
   - ["ganglia-node:juju-info", "spark:juju-info"]
   - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
-  - ["rsyslog-forwarder-ha:juju-info", "client:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
   - ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
@@ -110,29 +114,20 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "6":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "7":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "8":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/ci-info.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/ci-info.yaml b/bigtop-deploy/juju/hadoop-spark/ci-info.yaml
new file mode 100644
index 0000000..ae79aee
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-spark/ci-info.yaml
@@ -0,0 +1,34 @@
+bundle:
+  name: hadoop-spark
+  namespace: bigdata-charmers
+  release: true
+  to-channel: beta
+charm-upgrade:
+  hadoop-namenode:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-resourcemanager:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-slave:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-client:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  hadoop-plugin:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  spark:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  zookeeper:
+    from-channel: edge
+    release: true
+    to-channel: beta
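
As a rough illustration of how CI tooling might consume a ci-info.yaml like this one (the script below is a hypothetical sketch, not part of the commit; only the file layout is from the source):

    import yaml

    with open('ci-info.yaml') as f:
        ci = yaml.safe_load(f)

    # The bundle itself is released to its target channel when requested.
    bundle = ci['bundle']
    if bundle.get('release'):
        print('release bundle {} to {}'.format(bundle['name'], bundle['to-channel']))

    # Each charm is upgraded from its from-channel and optionally released.
    for charm, spec in sorted(ci.get('charm-upgrade', {}).items()):
        print('upgrade {} from channel {}'.format(charm, spec['from-channel']))
        if spec.get('release'):
            print('release {} to {}'.format(charm, spec['to-channel']))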

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/tests/01-bundle.py b/bigtop-deploy/juju/hadoop-spark/tests/01-bundle.py
index ba292bc..e8a0766 100755
--- a/bigtop-deploy/juju/hadoop-spark/tests/01-bundle.py
+++ b/bigtop-deploy/juju/hadoop-spark/tests/01-bundle.py
@@ -34,17 +34,6 @@ class TestBundle(unittest.TestCase):
             bun = f.read()
         bundle = yaml.safe_load(bun)
 
-        # NB: strip machine ('to') placement out. amulet loses our machine spec
-        # somewhere between yaml and json; without that spec, charms specifying
-        # machine placement will not deploy. This is ok for now because all
-        # charms in this bundle are using 'reset: false' so we'll already
-        # have our deployment just the way we want it by the time this test
-        # runs. However, it's bad. Remove once this is fixed:
-        #  https://github.com/juju/amulet/issues/148
-        for service, service_config in bundle['services'].items():
-            if 'to' in service_config:
-                del service_config['to']
-
         cls.d.load(bundle)
         cls.d.setup(timeout=3600)
 

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml b/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
index c9325b0..84f78d7 100644
--- a/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
@@ -1,5 +1,5 @@
 reset: false
-deployment_timeout: 7200
+deployment_timeout: 3600
 sources:
   - 'ppa:juju/stable'
 packages:

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/README.md b/bigtop-deploy/juju/spark-processing/README.md
index 36cf029..c39fa2c 100644
--- a/bigtop-deploy/juju/spark-processing/README.md
+++ b/bigtop-deploy/juju/spark-processing/README.md
@@ -251,7 +251,7 @@ Multiple units may be added at once.  For example, add four more spark units:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle-dev.yaml b/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
index c9689ec..df8306f 100644
--- a/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
@@ -1,6 +1,7 @@
 services:
   spark:
     charm: "cs:~bigdata-dev/xenial/spark"
+    constraints: "mem=7G root-disk=32G"
     num_units: 2
     annotations:
       gui-x: "500"
@@ -9,7 +10,8 @@ services:
       - "0"
       - "1"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "cs:~bigdata-dev/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
@@ -32,7 +34,7 @@ services:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
@@ -55,20 +57,14 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle-local.yaml b/bigtop-deploy/juju/spark-processing/bundle-local.yaml
index 90e51e7..063d5e7 100644
--- a/bigtop-deploy/juju/spark-processing/bundle-local.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle-local.yaml
@@ -1,6 +1,7 @@
 services:
   spark:
     charm: "/home/ubuntu/charms/xenial/spark"
+    constraints: "mem=7G root-disk=32G"
     num_units: 2
     annotations:
       gui-x: "500"
@@ -9,7 +10,8 @@ services:
       - "0"
       - "1"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "/home/ubuntu/charms/xenial/zookeeper"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
@@ -32,7 +34,7 @@ services:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-7"
     num_units: 1
     annotations:
       gui-x: "1000"
@@ -55,20 +57,14 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle.yaml b/bigtop-deploy/juju/spark-processing/bundle.yaml
index 6ccb639..0a37882 100644
--- a/bigtop-deploy/juju/spark-processing/bundle.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle.yaml
@@ -1,6 +1,7 @@
 services:
   spark:
-    charm: "cs:xenial/spark-17"
+    charm: "cs:xenial/spark-19"
+    constraints: "mem=7G root-disk=32G"
     num_units: 2
     annotations:
       gui-x: "500"
@@ -9,7 +10,8 @@ services:
       - "0"
       - "1"
   zookeeper:
-    charm: "cs:xenial/zookeeper-10"
+    charm: "cs:xenial/zookeeper-12"
+    constraints: "mem=3G root-disk=32G"
     num_units: 3
     annotations:
       gui-x: "500"
@@ -55,20 +57,14 @@ relations:
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
   "0":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "1":
-    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "5":
-    constraints: "mem=3G"
     series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/ci-info.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/ci-info.yaml b/bigtop-deploy/juju/spark-processing/ci-info.yaml
new file mode 100644
index 0000000..4402a9a
--- /dev/null
+++ b/bigtop-deploy/juju/spark-processing/ci-info.yaml
@@ -0,0 +1,14 @@
+bundle:
+  name: spark-processing
+  namespace: bigdata-charmers
+  release: true
+  to-channel: beta
+charm-upgrade:
+  spark:
+    from-channel: edge
+    release: true
+    to-channel: beta
+  zookeeper:
+    from-channel: edge
+    release: true
+    to-channel: beta

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/tests/01-bundle.py b/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
index fbb4ebf..7782136 100755
--- a/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
+++ b/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
@@ -31,17 +31,6 @@ class TestBundle(unittest.TestCase):
             bun = f.read()
         bundle = yaml.safe_load(bun)
 
-        # NB: strip machine ('to') placement out. amulet loses our machine spec
-        # somewhere between yaml and json; without that spec, charms specifying
-        # machine placement will not deploy. This is ok for now because all
-        # charms in this bundle are using 'reset: false' so we'll already
-        # have our deployment just the way we want it by the time this test
-        # runs. However, it's bad. Remove once this is fixed:
-        #  https://github.com/juju/amulet/issues/148
-        for service, service_config in bundle['services'].items():
-            if 'to' in service_config:
-                del service_config['to']
-
         cls.d.load(bundle)
         cls.d.setup(timeout=3600)
         cls.d.sentry.wait_for_messages({'spark': 'ready (standalone - HA)'}, timeout=3600)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
index 621a1e8..e5cbf63 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
@@ -124,7 +124,7 @@ The web interface will be available at the following URL:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metrics.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metrics.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metrics.yaml
new file mode 100644
index 0000000..f091b67
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metrics.yaml
@@ -0,0 +1,13 @@
+metrics:
+  namenodes:
+    type: gauge
+    description: number of namenodes in the cluster
+    command: hdfs getconf -namenodes 2>/dev/null | wc -l
+  offlinedatanodes:
+    type: gauge
+    description: number of dead datanodes in the cluster (must be run as hdfs)
+    command: su hdfs -c 'hdfs dfsadmin -report -dead 2>/dev/null | grep -i datanodes | grep -o [0-9] || echo 0'
+  onlinedatanodes:
+    type: gauge
+    description: number of live datanodes in the cluster (must be run as hdfs)
+    command: su hdfs -c 'hdfs dfsadmin -report -live 2>/dev/null | grep -i datanodes | grep -o [0-9] || echo 0'
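
Metrics declared in a charm's metrics.yaml are collected by Juju on its own schedule; assuming a Juju 2.x controller, they can also be gathered and inspected by hand (unit name is an example):

    juju collect-metrics namenode/0
    juju metrics namenode/0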

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
index 405c08a..f9e4483 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
@@ -114,7 +114,7 @@ Show the dfsadmin report on the command line with the following:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
index 430cc97..829ea3a 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
@@ -175,7 +175,7 @@ cluster. Each benchmark is an action that can be run with `juju run-action`:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metrics.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metrics.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metrics.yaml
new file mode 100644
index 0000000..137e07e
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metrics.yaml
@@ -0,0 +1,5 @@
+metrics:
+  nodemanagers:
+    type: gauge
+    description: number of running node managers in the cluster
+    command: yarn node -list -all 2>/dev/null | grep RUNNING | wc -l

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
index 4bf240d..ee93239 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
@@ -121,7 +121,7 @@ Multiple units may be added at once.  For example, add four more slave units:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/kafka/layer-kafka/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/kafka/layer-kafka/README.md b/bigtop-packages/src/charm/kafka/layer-kafka/README.md
index 0d41677..93fca06 100644
--- a/bigtop-packages/src/charm/kafka/layer-kafka/README.md
+++ b/bigtop-packages/src/charm/kafka/layer-kafka/README.md
@@ -222,7 +222,7 @@ machine, simply pass ``0.0.0.0`` to ``network_interface``.
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Apache Kafka home page](http://kafka.apache.org/)
 - [Apache Kafka issue tracker](https://issues.apache.org/jira/browse/KAFKA)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/mahout/layer-mahout/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/mahout/layer-mahout/README.md b/bigtop-packages/src/charm/mahout/layer-mahout/README.md
index 065fe00..2ecbf5a 100644
--- a/bigtop-packages/src/charm/mahout/layer-mahout/README.md
+++ b/bigtop-packages/src/charm/mahout/layer-mahout/README.md
@@ -112,7 +112,7 @@ of Juju, the syntax is `juju action fetch <action-id>`.
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Apache Mahout home page](https://mahout.apache.org/)
 - [Apache Mahout issue tracker](https://issues.apache.org/jira/browse/MAHOUT)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/pig/layer-pig/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/pig/layer-pig/README.md b/bigtop-packages/src/charm/pig/layer-pig/README.md
index e174fd5..2334876 100644
--- a/bigtop-packages/src/charm/pig/layer-pig/README.md
+++ b/bigtop-packages/src/charm/pig/layer-pig/README.md
@@ -132,7 +132,7 @@ information.
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/spark/layer-spark/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/spark/layer-spark/README.md b/bigtop-packages/src/charm/spark/layer-spark/README.md
index da88676..6a55c82 100644
--- a/bigtop-packages/src/charm/spark/layer-spark/README.md
+++ b/bigtop-packages/src/charm/spark/layer-spark/README.md
@@ -329,7 +329,7 @@ Each benchmark is an action that can be run with `juju run-action`:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/zeppelin/layer-zeppelin/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/zeppelin/layer-zeppelin/README.md b/bigtop-packages/src/charm/zeppelin/layer-zeppelin/README.md
index cd8df8f..c76ad1e 100644
--- a/bigtop-packages/src/charm/zeppelin/layer-zeppelin/README.md
+++ b/bigtop-packages/src/charm/zeppelin/layer-zeppelin/README.md
@@ -126,7 +126,7 @@ of Juju, the syntax is `juju action fetch <action-id>`.
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/zookeeper/layer-zookeeper/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/zookeeper/layer-zookeeper/README.md b/bigtop-packages/src/charm/zookeeper/layer-zookeeper/README.md
index af9f449..9c92d73 100644
--- a/bigtop-packages/src/charm/zookeeper/layer-zookeeper/README.md
+++ b/bigtop-packages/src/charm/zookeeper/layer-zookeeper/README.md
@@ -183,7 +183,7 @@ that require Zookeeper as follows:
 
 # Resources
 
-- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop home page](http://bigtop.apache.org/)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
 - [Apache Zookeeper home page](https://zookeeper.apache.org/)
 - [Apache Zookeeper issue tracker](https://issues.apache.org/jira/browse/ZOOKEEPER)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/5c0dc2a2/bigtop-packages/src/charm/zookeeper/layer-zookeeper/metrics.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/zookeeper/layer-zookeeper/metrics.yaml b/bigtop-packages/src/charm/zookeeper/layer-zookeeper/metrics.yaml
new file mode 100644
index 0000000..b7fc353
--- /dev/null
+++ b/bigtop-packages/src/charm/zookeeper/layer-zookeeper/metrics.yaml
@@ -0,0 +1,5 @@
+metrics:
+  peers:
+    type: gauge
+    description: number of zookeeper servers in the cluster
+    command: grep ^server /etc/zookeeper/conf/zoo.cfg  | wc -l
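
For context, the command above counts the server entries in a standard zoo.cfg, which look like the following (hostnames here are placeholders):

    server.1=zk-0.example.com:2888:3888
    server.2=zk-1.example.com:2888:3888
    server.3=zk-2.example.com:2888:3888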