Posted to commits@edgent.apache.org by dl...@apache.org on 2016/05/02 21:37:19 UTC

[3/4] incubator-quarks-website git commit: [QUARKS-159] Update website to follow style guide

[QUARKS-159] Update website to follow style guide


Project: http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/commit/4a3afaef
Tree: http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/tree/4a3afaef
Diff: http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/diff/4a3afaef

Branch: refs/heads/master
Commit: 4a3afaefc23415144726e9383795f64aa49fe649
Parents: 5539ecc
Author: Queenie Ma <qu...@gmail.com>
Authored: Fri Apr 29 13:01:37 2016 -0700
Committer: Queenie Ma <qu...@gmail.com>
Committed: Fri Apr 29 13:08:49 2016 -0700

----------------------------------------------------------------------
 README.md                                       |  54 ++-
 site/docs/committers.md                         |   5 +-
 site/docs/common-quarks-operations.md           |  58 +--
 site/docs/community.md                          |  53 +--
 site/docs/console.md                            | 422 ++++++++++---------
 site/docs/faq.md                                |  28 +-
 site/docs/home.md                               |  28 +-
 site/docs/quarks-getting-started.md             | 157 +++----
 site/docs/quarks_index.md                       |  22 +-
 site/docs/quickstart.md                         |  36 +-
 site/docs/samples.md                            |  29 +-
 .../recipes/recipe_adaptable_deadtime_filter.md |  59 ++-
 site/recipes/recipe_adaptable_filter_range.md   |  52 +--
 site/recipes/recipe_adaptable_polling_source.md |  61 ++-
 ...cipe_combining_streams_processing_results.md | 326 +++++++-------
 ...ecipe_different_processing_against_stream.md | 282 ++++++-------
 site/recipes/recipe_dynamic_analytic_control.md |  52 +--
 site/recipes/recipe_external_filter_range.md    |  36 +-
 site/recipes/recipe_hello_quarks.md             |  61 ++-
 site/recipes/recipe_source_function.md          | 117 ++---
 site/recipes/recipe_value_out_of_range.md       | 247 ++++++-----
 21 files changed, 1081 insertions(+), 1104 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 324d139..b60e09b 100644
--- a/README.md
+++ b/README.md
@@ -25,35 +25,31 @@ http://quarks.incubator.apache.org/
 
 ## How it works
 
-This procedure was borrowed in part from the apex site. (https://git-wip-us.apache.org/repos/asf?p=incubator-apex-site.git) except we use jekyll.
+This procedure was borrowed in part from the [Apache Apex site](https://git-wip-us.apache.org/repos/asf?p=incubator-apex-site.git) except we use Jekyll.
 
- The master branch of this repo contains the source files that are used to generate the HTML that ultimately gets pushed to the incubator site.
-The `asf-site` branch is where the actual generated files are stored. Note that this branch must contain exactly one folder called `content`,
- and so has been checked out as an orphan branch with its own commit history apart from the master branch. See the *Contributing* section below.
- 
-Through a [gitpubsub](http://www.apache.org/dev/gitpubsub.html) mechanism on the apache.org server,
-files are taken from the `asf-site` branch and pushed to the live server.
+The `master` branch of this repo contains the source files that are used to generate the HTML that ultimately gets pushed to the incubator site. The `asf-site` branch is where the actual generated files are stored. Note that this branch must contain exactly one folder called `content`, and so has been checked out as an orphan branch with its own commit history apart from the `master` branch. See the *Contributing* section below.
+
+Through a [gitpubsub](http://www.apache.org/dev/gitpubsub.html) mechanism on the apache.org server, files are taken from the `asf-site` branch and pushed to the live server.
+
+## Contributing
 
-Contributing
-------------
 If you would like to make a change to the site:
- 
- 1. Fork the [github mirror](https://github.com/apache/incubator-quarks-website)
- 2. Create a new branch from `master`
- 3. Add commit(s) to your branch
- 4. Test your changes locally (see Developing)
- 5. Open a pull request on the github mirror
- 6. A committer will merge your changes if all is good 
+
+1. Fork the [GitHub mirror](https://github.com/apache/incubator-quarks-website)
+2. Create a new branch from `master`
+3. Add commit(s) to your branch
+4. Test your changes locally (see the *Developing* section)
+5. Open a pull request in the GitHub mirror
+6. A committer will merge your changes if all is good
 
 If you are a committer, do the following:
-  
- 1. Update the master branch with your (or a Pull Request's) change.
- 2. Push updated master to the asf remote master (https://git-wip-us.apache.org/repos/asf/incubator-quarks-site.git)
- 3. Run `build.sh` from the master branch directory (requires jekyll). This checks out and updates the `asf-site` branch with a new commit of the build from the current branch
- 
- 4. At this point, you should be on the `asf-site` branch. Simply push this branch to the asf remote with  `git push origin asf-site` and the site will automatically be updated within seconds.
 
-Note: If you want to try out the website locally on the asf-site branch before you push, you can do so with `jekyll serve -d content --skip-initial-build` and point your browser to http://localhost:4000
+1. Update the `master` branch with your (or a pull request's) change
+2. Push the updated `master` to the [asf remote master](https://git-wip-us.apache.org/repos/asf/incubator-quarks-site.git)
+3. Run `build.sh` from the `master` branch directory (requires Jekyll). This checks out and updates the `asf-site` branch with a new commit of the build from the current branch.
+4. At this point, you should be on the `asf-site` branch. Simply push this branch to the asf remote with `git push origin asf-site` and the site will automatically be updated within seconds.
+
+Note: If you want to try out the website locally on the asf-site branch before you push, you can do so with `jekyll serve -d content --skip-initial-build` and point your browser to `http://localhost:4000`.
 
 ### Style Guide
 
@@ -72,16 +68,14 @@ In order to ensure a consistent user experience, these guidelines should be foll
 6. For [code blocks](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code), use three backticks `` ``` ``, and if applicable, specify the language for syntax highlighting
 7. Avoid using raw HTML tags. Use the equivalent Markdown syntax.
 8. Whitespaces
+   * Use one space between sentences
    * Use one blank line between paragraphs for the best readability
    * Do not use leading whitespace, except for special cases, such as indenting within list items
    * Do not use trailing whitespace, except for the case where a line break is needed. In that case, end a line with two spaces.
 9. Use correct spelling and grammar, especially for references to other projects. For example, use *GitHub*, not *Github*.
 
-Developing
------------
- 1. Make your changes under site
- 2. cd site
- 3. jekyll serve .
- 4. point your browser to http://localhost:4000/
-
+## Developing
 
+1. Make your changes under the `site` directory: `cd site`
+2. `jekyll serve`
+3. Point your browser to `http://localhost:4000`

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/committers.md
----------------------------------------------------------------------
diff --git a/site/docs/committers.md b/site/docs/committers.md
index 6766404..bd04548 100644
--- a/site/docs/committers.md
+++ b/site/docs/committers.md
@@ -1,6 +1,5 @@
 ---
-title: Committers  
-description: Commit activity and how to become a committer
+title: Committers
 ---
 
 ## Commit activity
@@ -11,7 +10,7 @@ To see commit activity for Quarks, click [here](https://github.com/apache/incuba
 
 You can become a committer by contributing to Quarks. Qualifications for a new committer include:
 
-* **Sustained Contributions**: Potential committers should have a history of contributions to Quarks. They will create pull requests over a period of time.  
+* **Sustained Contributions**: Potential committers should have a history of contributions to Quarks. They will create pull requests over a period of time.
 
 * **Quality of Contributions**: Potential committers should submit code that adds value to Quarks, including tests and documentation as needed. They should comment in a positive way on issues and pull requests, providing guidance and input to improve Quarks.
 

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/common-quarks-operations.md
----------------------------------------------------------------------
diff --git a/site/docs/common-quarks-operations.md b/site/docs/common-quarks-operations.md
index 273bdc2..d65a8ec 100644
--- a/site/docs/common-quarks-operations.md
+++ b/site/docs/common-quarks-operations.md
@@ -5,56 +5,58 @@ title: Common Quarks operations
 In the [Getting started guide](quarks-getting-started), we covered a Quarks application where we read from a device's simulated temperature sensor. Yet Quarks supports more operations than simple filtering. Data analysis and streaming require a suite of functionality, the most important components of which will be outlined below.
 
 ## TStream.map()
-TStream.map() is arguably the most used method in the Quarks API. Its two main purposes are to perform stateful or stateless operations on a stream's tuples, and to produce a TStream with tuples of a different type from that of the calling stream.
+
+`TStream.map()` is arguably the most used method in the Quarks API. Its two main purposes are to perform stateful or stateless operations on a stream's tuples, and to produce a `TStream` with tuples of a different type from that of the calling stream.
 
 ### Changing a TStream's tuple type
-In addition to filtering tuples, TStreams support operations that *transform* tuples from one Java type to another by invoking the TStream.map() method.
 
-<img src="images/Map_Type_Change.jpg" style="width:750px;height:150px;">
+In addition to filtering tuples, `TStream`s support operations that *transform* tuples from one Java type to another by invoking the `TStream.map()` method.
+
+<img src="images/Map_Type_Change.jpg" alt="Image of a type change" style="width:750px; height:150px;">
 
-This is useful in cases such as calculating the floating point average of a list of Integers, or tokenizing a Java String into a list of Strings. To demonstrate this, let's say we have a TStream which contains a few lines, each of which contains multiple words:
+This is useful in cases such as calculating the floating point average of a list of `Integer`s, or tokenizing a Java String into a list of `String`s. To demonstrate this, let's say we have a `TStream` which contains a few lines, each of which contains multiple words:
 
 ```java
-    TStream<String> lines = topology.strings(
-            "this is a line",
-            "this is another line",
-            "there are three lines now",
-            "and now four"
-        );
+TStream<String> lines = topology.strings(
+    "this is a line",
+    "this is another line",
+    "there are three lines now",
+    "and now four"
+);
 ```
 
-We then want to print the third word in each line. The best way to do this is to convert each line to a list of Strings by tokenizing them. We can do this in one line of code with the TStream.map() method:
+We then want to print the third word in each line. The best way to do this is to convert each line to a list of `String`s by tokenizing them. We can do this in one line of code with the `TStream.map()` method:
 
 ```java
-    TStream<List<String> > wordsInLine = lines.map(tuple -> Arrays.asList(tuple.split(" ")));
+TStream<List<String> > wordsInLine = lines.map(tuple -> Arrays.asList(tuple.split(" ")));
 ```
 
-Since each tuple is now a list of strings, the *wordsInLine* stream is of type List<String>. As you can see, the map() method has the ability to change the type of the TStream. Finally, we can use the *wordsInLine* stream to print the third word in each line.
+Since each tuple is now a list of strings, the `wordsInLine` stream is of type `List<String>`. As you can see, the `map()` method has the ability to change the type of the `TStream`. Finally, we can use the `wordsInLine` stream to print the third word in each line.
 
 ```java
-    wordsInLine.sink(list -> System.out.println(list.get(2)));
+wordsInLine.sink(list -> System.out.println(list.get(2)));
 ```
 
-As mentioned in the [Getting started guide](quarks-getting-started), a TStream can be parameterized to any serializable Java type, including ones created by the user.
+As mentioned in the [Getting started guide](quarks-getting-started), a `TStream` can be parameterized to any serializable Java type, including ones created by the user.
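
The tokenize-and-extract logic in the snippets above can be checked outside of Quarks. Below is a minimal, self-contained plain-Java sketch of the same per-tuple behavior; the class and method names are made up for illustration and are not part of the Quarks API:

```java
import java.util.Arrays;
import java.util.List;

public class ThirdWord {
    // Mirrors the per-tuple work of the map() and sink() calls above:
    // split a line into words, then pick the third word.
    static String thirdWord(String line) {
        List<String> words = Arrays.asList(line.split(" "));
        return words.get(2);
    }

    public static void main(String[] args) {
        String[] lines = {
            "this is a line",
            "this is another line",
            "there are three lines now",
            "and now four"
        };
        for (String line : lines) {
            System.out.println(thirdWord(line));
        }
    }
}
```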
 
 ### Performing stateful operations
 
-In all previous examples, the operations performed on a TStream have been stateless; keeping track of information over multiple invocations of the same operation has not been necessary. What if we want to keep track of the number of Strings sent over a stream? To do this, we need our TStream.map() method to contain a counter as state.
+In all previous examples, the operations performed on a `TStream` have been stateless; keeping track of information over multiple invocations of the same operation has not been necessary. What if we want to keep track of the number of Strings sent over a stream? To do this, we need our `TStream.map()` method to contain a counter as state.
 
-<img src="images/Map_Stateful.jpg" style="width:750px;height:150px;">
+<img src="images/Map_Stateful.jpg" alt="Image of a stateful operation" style="width:750px; height:150px;">
 
-This can be achieved by creating an anonymous Function class, and giving it the required fields.
+This can be achieved by creating an anonymous `Function` class, and giving it the required fields.
 
 ```java
-	TStream<String> streamOfStrings = ...;
-    TStream<Integer> counts = streamOfStrings.map(new Function<String, Integer>(){
-            int count = 0;
-            @Override
-            public Integer apply(String arg0) {
-                count = count + 1;
-                return count;
-            }
-        });
+TStream<String> streamOfStrings = ...;
+TStream<Integer> counts = streamOfStrings.map(new Function<String, Integer>() {
+    int count = 0;
+    @Override
+    public Integer apply(String arg0) {
+        count = count + 1;
+        return count;
+    }
+});
 ```
 
-The *count* field will now contain the number of Strings which were sent over streamOfStrings. Although this is a simple example, the anonymous Function passed to TStream.map() can contain any kind of state! This could be a HashMap<K, T>, a running list of tuples, or any serializable Java type. The state will be maintained throughout the entire runtime of your application.
+The `count` field will now contain the number of `String`s which were sent over `streamOfStrings`. Although this is a simple example, the anonymous `Function` passed to `TStream.map()` can contain any kind of state! This could be a `HashMap<K,V>`, a running list of tuples, or any serializable Java type. The state will be maintained throughout the entire runtime of your application.
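
As a self-contained illustration of the stateful pattern described above, the sketch below uses `java.util.function.Function` in place of the Quarks `Function` interface so it runs without Quarks on the classpath; the class name and factory method are hypothetical:

```java
import java.util.function.Function;
import java.util.stream.Stream;

public class StatefulCount {
    // An anonymous Function carrying a counter as state, analogous to
    // the anonymous Function passed to TStream.map() in the text.
    static Function<String, Integer> newCounter() {
        return new Function<String, Integer>() {
            int count = 0;

            @Override
            public Integer apply(String arg0) {
                count = count + 1;
                return count;
            }
        };
    }

    public static void main(String[] args) {
        Function<String, Integer> counter = newCounter();
        // Each application of the function updates its internal state.
        Stream.of("a", "b", "c").map(counter).forEach(System.out::println);
    }
}
```

Because the state lives in a field of the anonymous class instance, it persists across invocations of `apply`, which is exactly why the `count` field in the Quarks example keeps a running total.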

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/community.md
----------------------------------------------------------------------
diff --git a/site/docs/community.md b/site/docs/community.md
index 09c79c5..439d2c1 100644
--- a/site/docs/community.md
+++ b/site/docs/community.md
@@ -1,8 +1,5 @@
 ---
-layout: page
 title: Apache Quarks community
-description: Project community page
-group: nav-right
 ---
 <!--
 {% comment %}
@@ -32,62 +29,52 @@ You can:
 * Report bugs and submit patches.
 * Contribute code, javadocs, documentation.
 
-Visit the [Contributing](http://www.apache.org/foundation/getinvolved.html) page for general Apache contribution information. If you plan to make any significant contribution, you will need to have an Individual Contributor License Agreement [\(ICLA\)](https://www.apache.org/licenses/icla.txt)  on file with Apache.
+Visit the [Contributing](http://www.apache.org/foundation/getinvolved.html) page for general Apache contribution information. If you plan to make any significant contribution, you will need to have an Individual Contributor License Agreement [\(ICLA\)](https://www.apache.org/licenses/icla.txt) on file with Apache.
 
-### Mailing list
+## Mailing list
 
 Get help using {{ site.data.project.short_name }} or contribute to the project on our mailing lists:
 
 {% if site.data.project.user_list %}
-* [site.data.project.user_list](mailto:{{ site.data.project.user_list }}) is for usage questions, help, and announcements. [subscribe](mailto:{{ site.data.project.user_list_subscribe }}?subject=send this email to subscribe),     [unsubscribe](mailto:{{ site.data.project.dev_list_unsubscribe }}?subject=send this email to unsubscribe), [archives]({{ site.data.project.user_list_archive_mailarchive }})
+* [{{ site.data.project.user_list }}](mailto:{{ site.data.project.user_list }}) is for usage questions, help, and announcements. [subscribe](mailto:{{ site.data.project.user_list_subscribe }}?subject=send this email to subscribe), [unsubscribe](mailto:{{ site.data.project.dev_list_unsubscribe }}?subject=send this email to unsubscribe), [archives]({{ site.data.project.user_list_archive_mailarchive }})
 {% endif %}
 * [{{ site.data.project.dev_list }}](mailto:{{ site.data.project.dev_list }}) is for people who want to contribute code to {{ site.data.project.short_name }}. [subscribe](mailto:{{ site.data.project.dev_list_subscribe }}?subject=send this email to subscribe), [unsubscribe](mailto:{{ site.data.project.dev_list_unsubscribe }}?subject=send this email to unsubscribe), [Apache archives]({{ site.data.project.dev_list_archive }}), [mail-archive.com archives]({{ site.data.project.dev_list_archive_mailarchive }})
 * [{{ site.data.project.commits_list }}](mailto:{{ site.data.project.commits_list }}) is for commit messages and patches to {{ site.data.project.short_name }}. [subscribe](mailto:{{ site.data.project.commits_list_subscribe }}?subject=send this email to subscribe), [unsubscribe](mailto:{{ site.data.project.commits_list_unsubscribe }}?subject=send this email to unsubscribe), [Apache archives]({{ site.data.project.commits_list_archive }}), [mail-archive.com archives]({{ site.data.project.commits_list_archive_mailarchive }})
 
-
-### Issue tracker
+## Issue tracker
 
 We use Jira here: [https://issues.apache.org/jira/browse/{{ site.data.project.jira }}](https://issues.apache.org/jira/browse/{{ site.data.project.jira }})
 
-#### Bug reports
+### Bug reports
 
-Found bug? Enter an issue in  [Jira](https://issues.apache.org/jira/browse/{{ site.data.project.jira }}).
+Found a bug? Create an issue in [Jira](https://issues.apache.org/jira/browse/{{ site.data.project.jira }}).
 
 Before submitting an issue, please:
 
-* Verify that the bug does in fact exist.
-* Search the issue tracker to verify there is no existing issue reporting the bug you've found.
-* Consider tracking down the bug yourself in the {{ site.data.project.short_name }} source and submitting a pull request  along with your bug report. This is a great time saver for the  {{ site.data.project.short_name }} developers and helps ensure the bug will be fixed quickly.
-
-
-
-#### Feature requests
-
-Enhancement requests for new features are also welcome. The more concrete the request is and the better rationale you provide, the greater the chance it will incorporated into future releases.
-
-
-  [https://issues.apache.org/jira/browse/{{ site.data.project.jira }}](https://issues.apache.org/jira/browse/{{ site.data.project.jira }})
-
+* Verify that the bug does in fact exist
+* Search the issue tracker to verify there is no existing issue reporting the bug you've found
+* Consider tracking down the bug yourself in the {{ site.data.project.short_name }} source and submitting a pull request along with your bug report. This is a great time saver for the {{ site.data.project.short_name }} developers and helps ensure the bug will be fixed quickly.
 
-### Source code
+### Feature requests
 
-The project sources are accessible via the [source code repository]({{ site.data.project.source_repository }}) which is also mirrored in [GitHub]({{ site.data.project.source_repository_mirror }}). 
+Enhancement requests for new features are also welcome. The more concrete the request is and the better the rationale you provide, the greater the chance it will be incorporated into future releases. To make a request, create an issue in [Jira](https://issues.apache.org/jira/browse/{{ site.data.project.jira }}).
 
+## Source code
 
-When you are considering a code contribution, make sure there is an [Issue](https://issues.apache.org/jira/browse/{{ site.data.project.jira }}) that describes your work or the bug you are fixing.  For significant contributions, please discuss your proposed changes in the Issue so that others can comment on your plans.  Someone else may be working on the same functionality, so it's good to communicate early and often.  A committer is more likely to accept your change if there is clear information in the Issue. 
+The project sources are accessible via the [source code repository]({{ site.data.project.source_repository }}) which is also mirrored in [GitHub]({{ site.data.project.source_repository_mirror }}).
 
-To contribute, [fork](https://help.github.com/articles/fork-a-repo/) the [mirror]({{ site.data.project.source_repository_mirror }}) and issue a pull request. Put the Jira issue number, e.g. {{ site.data.project.jira }}-100 in the pull request title. The tag [WIP] can also be used in the title of pull requests to indicate that you are not ready to merge but want feedback. Remove [WIP] when you are ready for merge. Make sure you document your code and contribute tests along with the code.
+When you are considering a code contribution, make sure there is a [Jira issue](https://issues.apache.org/jira/browse/{{ site.data.project.jira }}) that describes your work or the bug you are fixing. For significant contributions, please discuss your proposed changes in the issue so that others can comment on your plans. Someone else may be working on the same functionality, so it's good to communicate early and often. A committer is more likely to accept your change if there is clear information in the issue.
 
+To contribute, [fork](https://help.github.com/articles/fork-a-repo/) the [mirror]({{ site.data.project.source_repository_mirror }}) and issue a [pull request](https://help.github.com/articles/using-pull-requests/). Put the Jira issue number, e.g. {{ site.data.project.jira }}-100 in the pull request title. The tag [WIP] can also be used in the title of pull requests to indicate that you are not ready to merge but want feedback. Remove [WIP] when you are ready for merge. Make sure you document your code and contribute tests along with the code.
 
 Read [DEVELOPMENT.md](https://github.com/apache/incubator-quarks/blob/master/DEVELOPMENT.md) at the top of the code tree for details on setting up your development environment.
 
- 
-### Web site and documentation source code
+## Web site and documentation source code
 
-The project website and documentation sources are accessible via the [website source code repository]({{ site.data.project.website_repository }}) which is also mirrored in [GitHub]({{ site.data.project.website_repository_mirror }}). Contributing changes to the web site and documentation is similar to contributing code.  Follow the instructions in the Source Code section above, but fork and issue a pull request against the [web site mirror]({{ site.data.project.website_repository_mirror }}). Follow the instructions in the top level [README.md]({{ site.data.project.website_repository_mirror }}/blob/master/README.md) for details on contributing to the web site and documentation.
+The project website and documentation sources are accessible via the [website source code repository]({{ site.data.project.website_repository }}) which is also mirrored in [GitHub]({{ site.data.project.website_repository_mirror }}). Contributing changes to the web site and documentation is similar to contributing code. Follow the instructions in the *Source Code* section above, but fork and issue a pull request against the [web site mirror]({{ site.data.project.website_repository_mirror }}). Follow the instructions in the top-level [README.md]({{ site.data.project.website_repository_mirror }}/blob/master/README.md) for details on contributing to the web site and documentation.
 
-  You will need to use Markdown and Jekyll to develop pages. See:
+You will need to use [Markdown](https://daringfireball.net/projects/markdown/) and [Jekyll](http://jekyllrb.com) to develop pages. See:
 
 * [Markdown Cheat Sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
-*  [Jekyll on linux and Mac](https://jekyllrb.com/)
-*  [Jekyll on Windows](https://jekyllrb.com/docs/windows/) is not officially supported but people have gotten it to work.
+* [Jekyll on Linux and Mac](https://jekyllrb.com/)
+* [Jekyll on Windows](https://jekyllrb.com/docs/windows/) is not officially supported but people have gotten it to work

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/console.md
----------------------------------------------------------------------
diff --git a/site/docs/console.md b/site/docs/console.md
index deebf79..dc18404 100644
--- a/site/docs/console.md
+++ b/site/docs/console.md
@@ -3,66 +3,71 @@ title: Application console
 ---
 
 ## Visualizing and monitoring your application
-The Quarks application console is a web application that enables you to visualize your application topology and monitor the tuples flowing through your application.  The kind of oplets used in the topology, as well as the stream tags included in the topology, are also visible in the console.
+
+The Quarks application console is a web application that enables you to visualize your application topology and monitor the tuples flowing through your application. The kind of oplets used in the topology, as well as the stream tags included in the topology, are also visible in the console.
 
 ## Adding the console web app to your application
+
 To use the console, you must either use the Quarks classes that provide the service to access the console web application or call the `HttpServer` class directly: start the server, then obtain the console URL.
 
 The easiest way to include the console in your application is to use the `DevelopmentProvider` class. `DevelopmentProvider` is a subclass of `DirectProvider` and adds services such as access to the console web application and counter oplets used to determine tuple counts. You can get the URL for the console from the `DevelopmentProvider` using the `getService` method, as shown in the hypothetical application below:
 
-```
-	import java.util.concurrent.TimeUnit;
-
-	import quarks.console.server.HttpServer;
-	import quarks.providers.development.DevelopmentProvider;
-	import quarks.topology.TStream;
-	import quarks.topology.Topology;
-
-	public class TempSensorApplication {
-		public static void main(String[] args) throws Exception {
-		    TempSensor sensor = new TempSensor();
-		    DevelopmentProvider dp = new DevelopmentProvider();
-		    Topology topology = dp.newTopology();
-		    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
-		    TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
-		    filteredReadings.print();
-
-		    System.out.println(dp.getServices().getService(HttpServer.class).getConsoleUrl());
-		    dp.submit(topology);
-		  }
-	}
+```java
+import java.util.concurrent.TimeUnit;
+
+import quarks.console.server.HttpServer;
+import quarks.providers.development.DevelopmentProvider;
+import quarks.topology.TStream;
+import quarks.topology.Topology;
+
+public class TempSensorApplication {
+    public static void main(String[] args) throws Exception {
+        TempSensor sensor = new TempSensor();
+        DevelopmentProvider dp = new DevelopmentProvider();
+        Topology topology = dp.newTopology();
+        TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
+        TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
+        filteredReadings.print();
+
+        System.out.println(dp.getServices().getService(HttpServer.class).getConsoleUrl());
+        dp.submit(topology);
+    }
+}
 ```
 
-Note that the console URL is being printed to System.out. The filteredReadings are as well, since filteredReadings.print() is being called in the application.  You may need to scroll your terminal window up to see the output for the console URL.
+Note that the console URL is being printed to `System.out`. The `filteredReadings` are as well, since `filteredReadings.print()` is being called in the application. You may need to scroll your terminal window up to see the output for the console URL.
 
-Optionally, you can modify the above code in the application to have a timeout before submitting the topology, which would allow you to see the console URL before any other output is shown.  The modification would look like this:
+Optionally, you can modify the above code in the application to have a timeout before submitting the topology, which would allow you to see the console URL before any other output is shown. The modification would look like this:
 
-```
-// print the console URL and wait for 10 seconds before submitting the topology
+```java
+// Print the console URL and wait for 10 seconds before submitting the topology
 System.out.println(dp.getServices().getService(HttpServer.class).getConsoleUrl());
 try {
-  TimeUnit.SECONDS.sleep(10);
+    TimeUnit.SECONDS.sleep(10);
 } catch (InterruptedException e) {
-  //do nothing
+    // Do nothing
 }
 dp.submit(topology);
 ```
 
-The other way to embed the console in your application is shown in the `HttpServerSample.java` example. It gets the HttpServer instance, starts it, and prints out the console URL.  Note that it does not submit a job, so when the console is displayed in the browser, there are no running jobs and therefore no Topology graph.  The example is meant to show how to get the `HttpServer` instance, start the console web app and get the URL of the console.
+The other way to embed the console in your application is shown in the `HttpServerSample.java` example (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/samples/console/src/main/java/quarks/samples/console/HttpServerSample.java)). It gets the `HttpServer` instance, starts it, and prints out the console URL. Note that it does not submit a job, so when the console is displayed in the browser, there are no running jobs and therefore no topology graph. The example is meant to show how to get the `HttpServer` instance, start the console web app and get the URL of the console.
+
+## Accessing the console
 
-# Accessing the console
 The console URL has the following format:
 
-http://host_name:port_number/console
+`http://host_name:port_number/console`
 
-Once it is obtained from `System.out`, enter it in a browser window.  
+Once it is obtained from `System.out`, enter it in a browser window.
 
-If you cannot access the console at this URL, ensure there is a `console.war` file in the `webapps` directory.  If the `console.war` file cannot be found, an exception will be thrown (in std.out) indicating `console.war` was not found.
+If you cannot access the console at this URL, ensure there is a `console.war` file in the `webapps` directory. If the `console.war` file cannot be found, an exception indicating that `console.war` was not found will be written to `stdout`.
 
 ## ConsoleWaterDetector sample
 
-To see the features of the console in action and as a way to demonstrate how to monitor a topology in the console, let's look at the `ConsoleWaterDetector` sample.
-Prior to running any console applications, the `console.war` file must be built as mentioned above.  If you are building quarks from a Git repository, go to the top level Quarks directory and run `ant`.
+To see the features of the console in action and as a way to demonstrate how to monitor a topology in the console, let's look at the `ConsoleWaterDetector` sample (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/samples/console/src/main/java/quarks/samples/console/ConsoleWaterDetector.java)).
+
+Prior to running any console applications, the `console.war` file must be built as mentioned above. If you are building Quarks from a Git repository, go to the top-level Quarks directory and run `ant`.
+
 Here is an example in my environment:
 
 ```
@@ -98,6 +103,7 @@ all:
 BUILD SUCCESSFUL
 Total time: 3 seconds
 ```
+
 This command will let you know that `console.war` was built and is in the correct place, under the `webapps` directory.
 
 ```
@@ -105,8 +111,7 @@ Susans-MacBook-Pro-247:quarks susancline$ find . -name console.war -print
 ./target/java8/console/webapps/console.war
 ```
 
-Now we know we have built `console.war`, so we're good to go.
-To run this sample from the command line:
+Now we know we have built `console.war`, so we're good to go. To run this sample from the command line:
 
 ```
 Susans-MacBook-Pro-247:quarks susancline$ pwd
@@ -114,7 +119,7 @@ Susans-MacBook-Pro-247:quarks susancline$ pwd
 Susans-MacBook-Pro-247:quarks susancline$ java -cp target/java8/samples/lib/quarks.samples.console.jar:. quarks.samples.console.ConsoleWaterDetector
 ```
 
-If everything is successful, you'll start seeing output.  You may have to scroll back up to get the URL of the console:
+If everything is successful, you'll start seeing output. You may have to scroll back up to get the URL of the console:
 
 ```
 Susans-MacBook-Pro-247:quarks susancline$ java -cp target/java8/samples/lib/quarks.samples.console.jar:. quarks.samples.console.ConsoleWaterDetector
@@ -140,15 +145,18 @@ Well1 alert, ecoli value is 1
 Well1 alert, temp value is 48
 Well3 alert, ecoli value is 1
 ```
+
 Now point your browser to the URL displayed above in the output from running the Java command to launch the `ConsoleWaterDetector` application. In this case, the URL is `http://localhost:57964/console`.
 
 Below is a screen shot of what you should see if everything is working properly:
 
 <img src='images/console_overview.jpg' alt='First view of the ConsoleWaterDetector app in the console' width='100%'/>
 
-# ConsoleWaterDetector application scenario
+## ConsoleWaterDetector application scenario
+
 The application is now running in your browser. Let's discuss the scenario for the application.
-A county agency is responsible for ensuring the safety of residents well water.  Each well they monitor has four different sensor types:
+
+A county agency is responsible for ensuring the safety of residents' well water. Each well they monitor has four different sensor types:
 
 * Temperature
 * Acidity
@@ -157,77 +165,81 @@ A county agency is responsible for ensuring the safety of residents well water.
 
 The sample application topology monitors 3 wells:
 
-* For the hypothetical scenario, Well1 and Well3 produce 'unhealthy' values from their sensors on occasion.  Well2 always produces 'healthy' values.  
-
-* Each well that is to be measured is added to the topology.  The topology polls each sensor (temp, ecoli, etc) for each well as a unit.  A TStream&lt;Integer&gt; is returned from polling the toplogy and represents a sensor reading.  Each sensor reading for the well has a tag added to it with the reading type i.e, "temp", and the well id.  Once all of the sensor readings are obtained and the tags added, each sensor reading is 'unioned' into a single TStream&lt;JsonObject&gt;.  Look at the `waterDetector` method for details on this.
-* Now, each well has a single stream with each of the sensors readings as a property with a name and value in the TStream&lt;JsonObject&gt;.  Next the `alertFilter` method is called on the TStream&lt;JsonObject&gt; representing each well.  This method checks the values for each well's sensors to determine if they are 'out of range' for healthy values. The `filter` oplet is used to do this. If any of the sensor's readings are out of the acceptable range the tuple is passed along. Those that are within an acceptable range are discarded.
-* Next the applications `splitAlert` method is called on each well's stream that contains the union of all the sensor readings that are out of range.  The `splitAlert` method uses the `split` oplet to split the incoming stream into 5 different streams.  Only those tuples that are out of range for each stream, which represents each sensor type, will be returned. The object returned from `splitAlert` is a list of TStream&lt;JsonObject&gt; objects. The `splitAlert` method is shown below:
-```
-public static List<TStream<JsonObject>> splitAlert(TStream<JsonObject> alertStream, int wellId) {
+* For the hypothetical scenario, Well1 and Well3 produce 'unhealthy' values from their sensors on occasion. Well2 always produces 'healthy' values.
+* Each well that is to be measured is added to the topology. The topology polls each sensor (temp, ecoli, etc.) for each well as a unit. A `TStream<Integer>` is returned from polling the topology and represents a sensor reading. Each sensor reading for the well has a tag added to it with the reading type (e.g., "temp") and the well id. Once all of the sensor readings are obtained and the tags added, each sensor reading is 'unioned' into a single `TStream<JsonObject>`. Look at the `waterDetector` method for details on this.
+* Now, each well has a single stream with each of the sensor readings as a property with a name and value in the `TStream<JsonObject>`. Next, the `alertFilter` method is called on the `TStream<JsonObject>` representing each well. This method checks the values for each well's sensors to determine if they are 'out of range' for healthy values. The `filter` oplet is used to do this. If any of the sensor readings are out of the acceptable range, the tuple is passed along. Those that are within an acceptable range are discarded.
+* Next, the application's `splitAlert` method is called on each well's stream that contains the union of all the sensor readings that are out of range. The `splitAlert` method uses the `split` oplet to split the incoming stream into 5 different streams. Only those tuples that are out of range for each stream, which represents each sensor type, will be returned. The object returned from `splitAlert` is a list of `TStream<JsonObject>` objects. The `splitAlert` method is shown below:
 
-		List<TStream<JsonObject>> allStreams = alertStream.split(5, tuple -> {
+    ```java
+    public static List<TStream<JsonObject>> splitAlert(TStream<JsonObject> alertStream, int wellId) {
+        List<TStream<JsonObject>> allStreams = alertStream.split(5, tuple -> {
             if (tuple.get("temp") != null) {
-            	JsonObject tempObj = new JsonObject();
-            	int temp = tuple.get("temp").getAsInt();
-            	if (temp <= TEMP_ALERT_MIN || temp >= TEMP_ALERT_MAX) {
-            		tempObj.addProperty("temp", temp);
-            		return 0;
-            	} else {
-            		return -1;
-            	}
-
+                JsonObject tempObj = new JsonObject();
+                int temp = tuple.get("temp").getAsInt();
+                if (temp <= TEMP_ALERT_MIN || temp >= TEMP_ALERT_MAX) {
+                    tempObj.addProperty("temp", temp);
+                    return 0;
+                } else {
+                    return -1;
+                }
             } else if (tuple.get("acidity") != null){
-            	JsonObject acidObj = new JsonObject();
-            	int acid = tuple.get("acidity").getAsInt();
-            	if (acid <= ACIDITY_ALERT_MIN || acid >= ACIDITY_ALERT_MAX) {
-            		acidObj.addProperty("acidity", acid);
-            		return 1;
-            	} else {
-            		return -1;
-            	}
+                JsonObject acidObj = new JsonObject();
+                int acid = tuple.get("acidity").getAsInt();
+                if (acid <= ACIDITY_ALERT_MIN || acid >= ACIDITY_ALERT_MAX) {
+                    acidObj.addProperty("acidity", acid);
+                    return 1;
+                } else {
+                    return -1;
+                }
             } else if (tuple.get("ecoli") != null) {
-            	JsonObject ecoliObj = new JsonObject();
-            	int ecoli = tuple.get("ecoli").getAsInt();
-            	if (ecoli >= ECOLI_ALERT) {
-            		ecoliObj.addProperty("ecoli", ecoli);
-            		return 2;
-            	} else {
-            		return -1;
-            	}
+                JsonObject ecoliObj = new JsonObject();
+                int ecoli = tuple.get("ecoli").getAsInt();
+                if (ecoli >= ECOLI_ALERT) {
+                    ecoliObj.addProperty("ecoli", ecoli);
+                    return 2;
+                } else {
+                    return -1;
+                }
             } else if (tuple.get("lead") != null) {
-            	JsonObject leadObj = new JsonObject();
-            	int lead = tuple.get("lead").getAsInt();
-            	if (lead >= LEAD_ALERT_MAX) {
-            		leadObj.addProperty("lead", lead);
-            		return 3;
-            	} else {
-            		return -1;
-            	}
+                JsonObject leadObj = new JsonObject();
+                int lead = tuple.get("lead").getAsInt();
+                if (lead >= LEAD_ALERT_MAX) {
+                    leadObj.addProperty("lead", lead);
+                    return 3;
+                } else {
+                    return -1;
+                }
             } else {
-            	 return -1;
+                return -1;
             }
         });
 
-		return allStreams;
-	}
-```
-* Next we want to get the temperature stream from the first well and put a rate meter on it to determine the rate at which the out of range values are flowing in the stream.
-```
-   List<TStream<JsonObject>> individualAlerts1 = splitAlert(filteredReadings1, 1);
+        return allStreams;
+    }
+    ```
+
+* Next, we want to get the temperature stream from the first well and put a rate meter on it to determine the rate at which the out-of-range values are flowing in the stream
+
+    ```java
+    List<TStream<JsonObject>> individualAlerts1 = splitAlert(filteredReadings1, 1);
+
+    // Put a rate meter on well1's temperature sensor output
+    Metrics.rateMeter(individualAlerts1.get(0));
+    ```
+
+* Next, all the sensors for well 1 have tags added to the stream indicating the stream is out of range for that sensor and the well id. Then a sink is added, passing the tuple to a `Consumer` that formats a string to `System.out` containing the well id, alert type (sensor type), and value of the sensor.
+
+    ```java
+    // Put a rate meter on well1's temperature sensor output
+    Metrics.rateMeter(individualAlerts1.get(0));
+    individualAlerts1.get(0).tag(TEMP_ALERT_TAG, "well1").sink(tuple -> System.out.println("\n" + formatAlertOutput(tuple, "1", "temp")));
+    individualAlerts1.get(1).tag(ACIDITY_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "acidity")));
+    individualAlerts1.get(2).tag(ECOLI_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "ecoli")));
+    individualAlerts1.get(3).tag(LEAD_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "lead")));
+    ```
 
-   // Put a rate meter on well1's temperature sensor output
-   Metrics.rateMeter(individualAlerts1.get(0));
-```
-* Next all the sensors for well 1 have tags added to the stream indicating the stream is out of range for that sensor and the well id.  Next a sink is added, passing the tuple to a `Consumer` that formats a string to `System.out` containing the well Id, alert type (sensor type) and value of the sensor.  
-```
-// Put a rate meter on well1's temperature sensor output
-Metrics.rateMeter(individualAlerts1.get(0));
-individualAlerts1.get(0).tag(TEMP_ALERT_TAG, "well1").sink(tuple -> System.out.println("\n" + formatAlertOutput(tuple, "1", "temp")));
-individualAlerts1.get(1).tag(ACIDITY_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "acidity")));
-individualAlerts1.get(2).tag(ECOLI_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "ecoli")));
-individualAlerts1.get(3).tag(LEAD_ALERT_TAG, "well1").sink(tuple -> System.out.println(formatAlertOutput(tuple, "1", "lead")));
-```
 Output in the terminal window from the `formatAlertOutput` method will look like this:
+
 ```
 Well1 alert, temp value is 86
 Well3 alert, ecoli value is 2
@@ -245,24 +257,24 @@ Notice how only those streams that are out of range for the temperature sensor t
 
 At the end of the `ConsoleWaterDetector` application is this snippet of code, added after the topology has been submitted:
 
-```
+```java
 dp.submit(wellTopology);
 
 while (true) {
-				MetricRegistry metricRegistry = dp.getServices().getService(MetricRegistry.class);
-				SortedMap<String, Counter> counters = metricRegistry.getCounters();
-
-				Set<Entry<String, Counter>> values = counters.entrySet();
-				for (Entry<String, Counter> e : values) {
-					if (e.getValue().getCount() == 0) {
-						System.out.println("Counter Op:" + e.getKey() + " tuple count: " + e.getValue().getCount());
-					}
-				}
-				Thread.sleep(2000);
-		}
+    MetricRegistry metricRegistry = dp.getServices().getService(MetricRegistry.class);
+    SortedMap<String, Counter> counters = metricRegistry.getCounters();
+
+    Set<Entry<String, Counter>> values = counters.entrySet();
+    for (Entry<String, Counter> e : values) {
+        if (e.getValue().getCount() == 0) {
+            System.out.println("Counter Op:" + e.getKey() + " tuple count: " + e.getValue().getCount());
+        }
+    }
+    Thread.sleep(2000);
+}
 ```
 
-What this does is get all the counters in the `MetricRegistry` class and print out the name of the counter oplet they are monitoring along with the tuple count if it is zero.  Here is some sample output:
+What this does is get all the counters in the `MetricRegistry` class and print out the name of the counter oplet they are monitoring along with the tuple count if it is zero. Here is some sample output:
 
 ```
 Counter Op:TupleCounter.quarks.oplet.JOB_0.OP_44 has a tuple count of zero!
@@ -278,36 +290,36 @@ Counter Op:TupleCounter.quarks.oplet.JOB_0.OP_98 has a tuple count of zero!
 
 To summarize what the application is doing:
 
-- Unions all sensor type readings for a single well.
-- Filters all sensor type readings for a single well, passing on an object that only contains tuples for the object that have at least one sensor type with out of range values.
-- Splits the object that contained name/value pairs for sensor type and readings into individual sensor types returning only those streams that contain out of range values.
-- Outputs to the command line the well id, sensor type and value that is out of range.
-- Tags are added at various points in the topology for easier identification of either the well or some out of range condition.
-- The topology contains counters to measure tuple counts since `DevelopmentProvider` was used.
-- Individual rate meters were placed on well1 and well3's temperature sensors to determine the rate of 'unhealthy' values.
-- Prints out the name of the counter oplets whose tuple counts are zero.
+* Unions all sensor type readings for a single well
+* Filters all sensor type readings for a single well, passing on an object that only contains tuples for the object that have at least one sensor type with out of range values
+* Splits the object that contained name/value pairs for sensor type and readings into individual sensor types returning only those streams that contain out of range values
+* Outputs to the command line the well id, sensor type and value that is out of range
+* Tags are added at various points in the topology for easier identification of either the well or some out of range condition
+* The topology contains counters to measure tuple counts since `DevelopmentProvider` was used
+* Individual rate meters were placed on `well1` and `well3`'s temperature sensors to determine the rate of 'unhealthy' values
+* Prints out the name of the counter oplets whose tuple counts are zero
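As a standalone illustration of the out-of-range check the `filter` oplet applies, here is a minimal sketch using only the JDK. The threshold values are invented for the example; the sample defines its own constants such as `TEMP_ALERT_MIN` and `TEMP_ALERT_MAX`:

```java
import java.util.function.IntPredicate;

public class RangeCheck {

    // Example thresholds only -- not the sample's actual values
    static final int TEMP_ALERT_MIN = 49;
    static final int TEMP_ALERT_MAX = 81;

    // A reading passes the filter (true) only when it is out of the healthy range
    static final IntPredicate tempOutOfRange =
            temp -> temp <= TEMP_ALERT_MIN || temp >= TEMP_ALERT_MAX;

    public static void main(String[] args) {
        System.out.println(tempOutOfRange.test(86)); // prints: true
        System.out.println(tempOutOfRange.test(65)); // prints: false
    }
}
```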
 
-# Topology graph controls
+## Topology graph controls
 
-Now that you have an understanding of what the application is doing, let's look at some of the controls in the console, so we can learn how to monitor the application.  Below is a screen shot of the top controls: the controls that affect the Topology Graph.
+Now that you have an understanding of what the application is doing, let's look at some of the controls in the console, so we can learn how to monitor the application. Below is a screen shot of the top controls: the controls that affect the topology graph.
 
 <img src='images/console_top_controls.jpg' alt='The controls that impact the topology graph' width='100%'/>
 
-* **Job**: A drop down to select which job is being displayed in the Topology Graph.  An application can contain multiple jobs.
-* **State**: Hovering over the 'State' icon shows information about the selected job.  The current and next states of the job, the job id and the job name.
-* **View by**: This select is used to change how the topology graph is displayed.  The three options for this select are:
+* **Job**: A drop-down to select which job is being displayed in the topology graph. An application can contain multiple jobs.
+* **State**: Hovering over the 'State' icon shows information about the selected job: the current and next states of the job, the job id, and the job name.
+* **View by**: This select is used to change how the topology graph is displayed. The three options for this select are:
   - Static flow
   - Tuple count
   - Oplet kind
-  - Currently it is set to 'Static flow'. This means the oplets (represented as circles in the topology graph) do not change size, nor do the lines or links (representing the edges of the topology graph) change width or position.  The graph is not being refreshed when it is in 'Static flow' mode.
-* **Refresh interval**: Allows the user to select an interval between 3 - 20 seconds to refresh the tuple count values in the graph. Every X seconds the metrics for the topology graph are refreshed.  More about metrics a little bit later.
-* **Pause graph**: Stops the refresh interval timer.  Once the 'Pause graph' button is clicked, the user must push 'Resume graph' for the graph to be updated, and then refreshed at the interval set in the 'Refresh interval' timer.  It can be helpful to pause the graph if multiple oplets are occupying the same area on the graph, and their names become unreadable. Once the graph is paused, the user can drag an oplet off of another oplet to better view the name and see the edge(s) that connect them.
+  - Currently it is set to 'Static flow'. This means the oplets (represented as circles in the topology graph) do not change size, nor do the lines or links (representing the edges of the topology graph) change width or position. The graph is not being refreshed when it is in 'Static flow' mode.
+* **Refresh interval**: Allows the user to select an interval between 3 and 20 seconds to refresh the tuple count values in the graph. Every X seconds the metrics for the topology graph are refreshed. More on metrics later.
+* **Pause graph**: Stops the refresh interval timer. Once the 'Pause graph' button is clicked, the user must push 'Resume graph' for the graph to be updated, and then refreshed at the interval set in the 'Refresh interval' timer. It can be helpful to pause the graph if multiple oplets are occupying the same area on the graph, and their names become unreadable. Once the graph is paused, the user can drag an oplet off of another oplet to better view the name and see the edge(s) that connect them.
 * **Show tags**: If the checkbox appears in the top controls, it means:
-  - The 'View by' layer is capable of displaying stream tags.
-  - The topology currently shown in the topology graph has stream tags associated with it.
-* **Show all tags**: Selecting this checkbox shows all the tags present in the topology.  If you want to see only certain tags, uncheck this box and select the button labeled 'Select individual tags ...'.  A dialog will appear, and you can select one or all of the tags listed in the dialog which are present in the topology.
+  - The 'View by' layer is capable of displaying stream tags
+  - The topology currently shown in the topology graph has stream tags associated with it
+* **Show all tags**: Selecting this checkbox shows all the tags present in the topology. If you want to see only certain tags, uncheck this box and select the button labeled 'Select individual tags ...'. A dialog will appear, and you can select one or all of the tags listed in the dialog which are present in the topology.
 
-<img src='images/console_select_individual_tags.jpg' />
+    <img src='images/console_select_individual_tags.jpg'/>
 
 The next aspect of the console we'll look at are the popups available when selecting 'View all oplet properties', hovering over an oplet and hovering over an edge (link).
 
@@ -315,81 +327,82 @@ The screen shot below shows the output from clicking on the 'View all oplet prop
 
 <img src='images/console_oplet_properties.jpg' alt='Displays a table showing the relationship between the oplets and vertices' width='100%'/>
 
-Looking at the sixth line in the table, where the Name is 'OP_5', we can see that the Oplet kind is a Map, a (quarks.oplet.functional.Map), the Tuple count is 0 (this is because the view is in Static flow mode - the graph does not show the number of tuples flowing in it), the source oplet is 'OP_55', the target oplet is 'OP_60', and there are no stream tags coming from the source or target streams.  Relationships for all oplets can be viewed in this manner.
+Looking at the sixth line in the table, where the Name is 'OP_5', we can see that the Oplet kind is a `Map` (`quarks.oplet.functional.Map`), the Tuple count is 0 (this is because the view is in Static flow mode - the graph does not show the number of tuples flowing in it), the source oplet is 'OP_55', the target oplet is 'OP_60', and there are no stream tags coming from the source or target streams. Relationships for all oplets can be viewed in this manner.
 
 Now, looking at the graph, if we want to see the relationships for a single oplet, we can hover over it. The image below shows the hover when we are over 'OP_5'.
 
 <img src='images/console_hover_over_op.jpg' width='100%'/>
 
-You can also hover over the edges of the topology graph to get information.  Hover over the edge (link) between 'OP_0' and 'OP_55'.  The image shows the name and kind of the oplet as the source, and the name and kind of oplet as the target.  Again the tuple count is 0 since this is the 'Static flow' view.  The last item of information in the tooltip is the tags on the stream.
-One or many tags can be added to a stream.  In this case we see the tags 'temperature' and 'well1'.
+You can also hover over the edges of the topology graph to get information. Hover over the edge (link) between 'OP_0' and 'OP_55'. The image shows the name and kind of the oplet as the source, and the name and kind of oplet as the target. Again the tuple count is 0 since this is the 'Static flow' view. The last item of information in the tooltip is the tags on the stream.
+
+One or many tags can be added to a stream. In this case we see the tags 'temperature' and 'well1'.
 
-<img src='images/console_hover_over_link.jpg' />
+<img src='images/console_hover_over_link.jpg'/>
 
 The section of the code that adds the tags 'temperature' and 'well1' is in the `waterDetector` method of the `ConsoleWaterDetector` class.
 
-```
+```java
 public static TStream<JsonObject> waterDetector(Topology topology, int wellId) {
-  Random rNum = new Random();
-  TStream<Integer> temp = topology.poll(() -> rNum.nextInt(TEMP_RANDOM_HIGH - TEMP_RANDOM_LOW) + TEMP_RANDOM_LOW, 1, TimeUnit.SECONDS);
-  TStream<Integer> acidity = topology.poll(() -> rNum.nextInt(ACIDITY_RANDOM_HIGH - ACIDITY_RANDOM_LOW) + ACIDITY_RANDOM_LOW, 1, TimeUnit.SECONDS);
-  TStream<Integer> ecoli = topology.poll(() -> rNum.nextInt(ECOLI_RANDOM_HIGH - ECOLI_RANDOM_LOW) + ECOLI_RANDOM_LOW, 1, TimeUnit.SECONDS);
-  TStream<Integer> lead = topology.poll(() -> rNum.nextInt(LEAD_RANDOM_HIGH - LEAD_RANDOM_LOW) + LEAD_RANDOM_LOW, 1, TimeUnit.SECONDS);
-  TStream<Integer> id = topology.poll(() -> wellId, 1, TimeUnit.SECONDS);
-
-  // add tags to each sensor
-  temp.tag("temperature", "well" + wellId);
-  ```
+    Random rNum = new Random();
+    TStream<Integer> temp = topology.poll(() -> rNum.nextInt(TEMP_RANDOM_HIGH - TEMP_RANDOM_LOW) + TEMP_RANDOM_LOW, 1, TimeUnit.SECONDS);
+    TStream<Integer> acidity = topology.poll(() -> rNum.nextInt(ACIDITY_RANDOM_HIGH - ACIDITY_RANDOM_LOW) + ACIDITY_RANDOM_LOW, 1, TimeUnit.SECONDS);
+    TStream<Integer> ecoli = topology.poll(() -> rNum.nextInt(ECOLI_RANDOM_HIGH - ECOLI_RANDOM_LOW) + ECOLI_RANDOM_LOW, 1, TimeUnit.SECONDS);
+    TStream<Integer> lead = topology.poll(() -> rNum.nextInt(LEAD_RANDOM_HIGH - LEAD_RANDOM_LOW) + LEAD_RANDOM_LOW, 1, TimeUnit.SECONDS);
+    TStream<Integer> id = topology.poll(() -> wellId, 1, TimeUnit.SECONDS);
+
+    // Add tags to each sensor
+    temp.tag("temperature", "well" + wellId);
+```
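Each `topology.poll` call above produces one pseudo-random reading per second using the expression `rNum.nextInt(HIGH - LOW) + LOW`, which yields a value in the half-open range `[LOW, HIGH)`. The bounds in this sketch are placeholders, not the sample's actual constants:

```java
import java.util.Random;

public class RandomReading {

    // Placeholder bounds; the sample defines constants such as TEMP_RANDOM_LOW/HIGH
    static final int LOW = 40;
    static final int HIGH = 90;

    // Returns a pseudo-random reading in [LOW, HIGH)
    static int nextReading(Random rNum) {
        return rNum.nextInt(HIGH - LOW) + LOW;
    }

    public static void main(String[] args) {
        Random rNum = new Random();
        for (int i = 0; i < 5; i++) {
            System.out.println(nextReading(rNum));
        }
    }
}
```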
 
-# Legend
+### Legend
 
-The legend(s) that appear in the console depend on the view currently displayed.  In the static flow mode, if no stream tags are present, there is no legend.  In this example we have stream tags in the topology, so the static flow mode gives us the option to select 'Show tags'.  If selected, the result is the addition of the Stream tags legend:
+The legend(s) that appear in the console depend on the view currently displayed. In the static flow mode, if no stream tags are present, there is no legend. In this example we have stream tags in the topology, so the static flow mode gives us the option to select 'Show tags'. If selected, the result is the addition of the stream tags legend:
 
-<img src='images/console_stream_tags_legend.jpg' />
+<img src='images/console_stream_tags_legend.jpg'/>
 
 This legend shows all the tags that have been added to the topology, regardless of whether or not 'Show all tags' is checked or specific tags have been selected from the dialog that appears when the 'Select individual tags ...' button is clicked.
 
-# Topology graph
+### Topology graph
+
 Now that we've covered most of the ways to modify the view of the topology graph and discussed the application, let's look at the topology graph as a way to understand our application.
 
 When analyzing what is happening in your application, here are some ways you might use the console to help you understand it:
 
 * Topology of the application - how the edges and vertices of the graph are related
-* Tuple flow  - tuple counts since the application was started
+* Tuple flow - tuple counts since the application was started
 * The effect of filters or maps on the downstream streams
 * Stream tags - if tags are added dynamically based on a condition, where the streams with tags are displayed in the topology
 
-Let's start with the static flow view of the topology.  We can look at the graph, and we can also hover over any of the oplets or streams to better understand the connections.  Also, we can click 'View all oplet properties' and see the relationships in a tabular format.
+Let's start with the static flow view of the topology. We can look at the graph, and we can also hover over any of the oplets or streams to better understand the connections. Also, we can click 'View all oplet properties' and see the relationships in a tabular format.
 
-The other thing to notice in the static flow view are the tags.  Look for any colored edges (the links between the oplets).
-All of the left-most oplets have streams with tags.  Most of them have the color that corresponds to 'Multiple tags'.  If you hover over the edges, you can see the tags.  It's obvious that we have tagged each sensor with the sensor type and the well id.
+The other thing to notice in the static flow view are the tags. Look for any colored edges (the links between the oplets). All of the left-most oplets have streams with tags. Most of them have the color that corresponds to 'Multiple tags'. If you hover over the edges, you can see the tags. It's obvious that we have tagged each sensor with the sensor type and the well id.
 
-Now, if you look to the far right, you can see more tags on streams coming out of a `split` oplet.  They also have multiple tags, and hovering over them you can determine that they represent out of range values for each sensor type for the well.  Notice how the `split` oplet, OP_43, has no tags in the streams coming out of it.  If you follow that split oplet back, you can determine from the first tags that it is part of the well 2 stream.
+Now, if you look to the far right, you can see more tags on streams coming out of a `split` oplet. They also have multiple tags, and hovering over them you can determine that they represent out-of-range values for each sensor type for the well. Notice how the `split` oplet, OP_43, has no tags in the streams coming out of it. If you follow that split oplet back, you can determine from the first tags that it is part of the `well2` stream.
 
-If you refer back to the `ConsoleWaterDetector` source, you can see that no tags were placed on the streams coming out of well2's split because they contained no out of range values.
+If you refer back to the `ConsoleWaterDetector` source, you can see that no tags were placed on the streams coming out of `well2`'s split because they contained no out of range values.
 
-Let's switch the view to Oplet kind now.  It will make more clear which oplets are producing the streams with the tags on them.
-Below is an image of how the graph looks after switching to the Oplet kind view.
+Let's switch the view to Oplet kind now. It will make more clear which oplets are producing the streams with the tags on them. Below is an image of how the graph looks after switching to the Oplet kind view.
 
 <img src="images/console_oplet_kind.jpg" width='100%'/>
 
-In the Oplet kind view the links are all the same width, but the circles representing the oplets are sized according to tuple flow.  Notice how the circles representing OP_10, OP_32 and OP_21 are large in relation to OP_80, OP_88 and OP_89.  As a matter of fact, we can't even see the circle representing OP_89.  Looking at OP_35 and then the Oplet kind legend, you can see by the color that it is a Filter oplet.  This is because the filter that we used against well2, which is the stream that OP_35 is part of returned no tuples.  This is a bit difficult to see. Let's look at the Tuple count view.
+In the Oplet kind view the links are all the same width, but the circles representing the oplets are sized according to tuple flow. Notice how the circles representing OP_10, OP_32 and OP_21 are large in relation to OP_80, OP_88 and OP_89. As a matter of fact, we can't even see the circle representing OP_89. Looking at OP_35 and then the Oplet kind legend, you can see by the color that it is a Filter oplet. This is because the filter that we used against `well2`, the stream that OP_35 is part of, returned no tuples. This is a bit difficult to see. Let's look at the Tuple count view.
 
-The Tuple count view will make it more clear that no tuples are following out of OP_35, which represents the filter for well2 and only returns out of range values.  You may recall that in this example well2 returned no out of range values.  Below is the screen shot of the graph in 'Tuple count' view mode.
+The Tuple count view will make it clearer that no tuples are flowing out of OP_35, which represents the filter for `well2` and only returns out of range values. You may recall that in this example `well2` returned no out of range values. Below is a screenshot of the graph in 'Tuple count' view mode.
 
 <img src="images/console_tuple_count.jpg" width='100%'/>
 
-The topology graph oplets can sometimes sit on top of each other.  If this is the case, pause the refresh and use your mouse to pull down on the oplets that are in the same position. This will allow you to see their name.  Alternately, you can use the 'View all properties' table to see the relationships between oplets.
+The topology graph oplets can sometimes sit on top of each other. If this is the case, pause the refresh and use your mouse to pull down on the oplets that are in the same position. This will allow you to see their names. Alternatively, you can use the 'View all properties' table to see the relationships between oplets.
+
+### Metrics
 
-# Metrics
-If you scroll the browser window down, you can see a Metrics section.  This section appears when the application contains the following:
+If you scroll the browser window down, you can see a Metrics section. This section appears when the application contains the following:
 
-* A ```DevelopmentProvider``` is used; this automatically inserts counters on the streams of the topology.
-* A ```quarks.metrics.Metric.Counter``` or ```quarks.metrics.Metric.RateMeter``` is added to an individual stream.
+* A `DevelopmentProvider` is used; this automatically inserts counters on the streams of the topology
+* A `quarks.metrics.Metric.Counter` or `quarks.metrics.Metric.RateMeter` is added to an individual stream
 
 ## Counters
 
-In the ```ConsoleWaterDetector``` application we used a ```DevelopmentProvider```.  Therefore, counters were added to most streams (edges) with the following exceptions (from the javadoc for ```quarks.metrics.Metric.Counter```):
+In the `ConsoleWaterDetector` application we used a `DevelopmentProvider`. Therefore, counters were added to most streams (edges) with the following exceptions (from the [Javadoc](http://quarks-edge.github.io/quarks/docs/javadoc/quarks/metrics/Metrics.html#counter-quarks.topology.TStream-) for `quarks.metrics.Metrics`):
 
 *Oplets are only inserted upstream from a FanOut oplet.*
 
@@ -398,71 +411,72 @@ In the ```ConsoleWaterDetector``` application we used a ```DevelopmentProvider``
 *If a chain of Peek oplets is followed by a FanOut, a metric oplet is inserted between the last Peek and the FanOut oplet.
 The implementation is not idempotent; previously inserted metric oplets are treated as regular graph vertices. Calling the method twice will insert a new set of metric oplets into the graph.*
 
-Also, the application inserts counters on well2's streams after the streams from the individual sensors were unioned and then split:
+Also, the application inserts counters on `well2`'s streams after the streams from the individual sensors were unioned and then split:
 
-```
-	List<TStream<JsonObject>> individualAlerts2 = splitAlert(filteredReadings2, 2);
+```java
+List<TStream<JsonObject>> individualAlerts2 = splitAlert(filteredReadings2, 2);
 
-	TStream<JsonObject> alert0Well2 = individualAlerts2.get(0);
-	alert0Well2  = Metrics.counter(alert0Well2);
-	alert0Well2.tag("well2", "temp");
+TStream<JsonObject> alert0Well2 = individualAlerts2.get(0);
+alert0Well2  = Metrics.counter(alert0Well2);
+alert0Well2.tag("well2", "temp");
 
-	TStream<JsonObject> alert1Well2 = individualAlerts2.get(1);
-	alert1Well2  = Metrics.counter(alert1Well2);
-	alert1Well2.tag("well2", "acidity");
+TStream<JsonObject> alert1Well2 = individualAlerts2.get(1);
+alert1Well2  = Metrics.counter(alert1Well2);
+alert1Well2.tag("well2", "acidity");
 
-	TStream<JsonObject> alert2Well2 = individualAlerts2.get(2);
-	alert2Well2  = Metrics.counter(alert2Well2);
-	alert2Well2.tag("well2", "ecoli");
+TStream<JsonObject> alert2Well2 = individualAlerts2.get(2);
+alert2Well2  = Metrics.counter(alert2Well2);
+alert2Well2.tag("well2", "ecoli");
 
-	TStream<JsonObject> alert3Well2 = individualAlerts2.get(3);
-	alert3Well2  = Metrics.counter(alert3Well2);
-	alert3Well2.tag("well2", "lead");
+TStream<JsonObject> alert3Well2 = individualAlerts2.get(3);
+alert3Well2  = Metrics.counter(alert3Well2);
+alert3Well2.tag("well2", "lead");
 ```
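To make the counter semantics concrete, here is a small stand-alone sketch (plain Java, not the Quarks API) of what `Metrics.counter` conceptually does: peek at each tuple and increment a count, without modifying the stream itself.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;
import java.util.stream.Stream;

public class CounterSketch {
    // Peek-style counter: observes each tuple and increments a count,
    // leaving the tuples themselves untouched
    static <T> Consumer<T> counter(AtomicLong count) {
        return tuple -> count.incrementAndGet();
    }

    public static void main(String[] args) {
        AtomicLong tupleCount = new AtomicLong();
        Stream.of("temp", "acidity", "ecoli", "lead")
              .peek(counter(tupleCount))
              .forEach(tuple -> { /* downstream processing */ });
        System.out.println(tupleCount.get()); // prints 4
    }
}
```

The key property, which the real metric oplets share, is that counting is a side effect: the downstream oplets see exactly the same tuples whether or not the counter is present.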
 
 When looking at the select next to the label 'Metrics', make sure the 'Count, oplets OP_37, OP_49 ...' is selected. This select compares all of the counters in the topology, visualized as a bar graph. An image is shown below:
 
 <img src="images/console_counter_metrics_bar.jpg" width='100%'/>
 
-Hover over individual bars to get the value of the number of tuples flowing through that oplet since the application was started.  You can also see the oplet name.  You can see that some of the oplets have zero tuples flowing through them.
+Hover over individual bars to get the value of the number of tuples flowing through that oplet since the application was started. You can also see the oplet name. You can see that some of the oplets have zero tuples flowing through them.
+
 The bars that are the tallest and therefore have the highest tuple count are OP_76, OP_67 and OP_65. If you look back up to the topology graph, in the Tuple count view, you can see that the edges (streams) surrounding these oplets have the color that corresponds to the highest tuple count (in the pictures above that color is bright orange in the Tuple count legend).
 
-## RateMeters
+### Rate meters
 
-The other type of metric we can look at are ```RateMeter``` metrics.  In the ```ConsoleWaterDetector``` application we added two rate meters here with the objective of comparing the rate of out of range readings between well1 and well3:
+The other type of metric we can look at is the rate meter. In the `ConsoleWaterDetector` application we added two rate meters here with the objective of comparing the rate of out of range readings between `well1` and `well3`:
 
 ```java
-	List<TStream<JsonObject>> individualAlerts1 = splitAlert(filteredReadings1, 1);
+List<TStream<JsonObject>> individualAlerts1 = splitAlert(filteredReadings1, 1);
 
-	// Put a rate meter on well1's temperature sensor output
-	Metrics.rateMeter(individualAlerts1.get(0));
-	...
-	List<TStream<JsonObject>> individualAlerts3 = splitAlert(filteredReadings3, 3);
+// Put a rate meter on well1's temperature sensor output
+Metrics.rateMeter(individualAlerts1.get(0));
+...
+List<TStream<JsonObject>> individualAlerts3 = splitAlert(filteredReadings3, 3);
 
-	// Put a rate meter on well3's temperature sensor output
-	Metrics.rateMeter(individualAlerts3.get(0));
+// Put a rate meter on well3's temperature sensor output
+Metrics.rateMeter(individualAlerts3.get(0));
 ```
 
-RateMeters contain the following metrics for each stream they are added to:
+Rate meters contain the following metrics for each stream they are added to:
 
   * Tuple count
   * The rate of change in the tuple count. The following rates are available for a single stream:
-    * 1 minute rate change
-    * 5 minute rate change
-    * 15 minute rate change
-    * Mean rate change
+    - 1 minute rate change
+    - 5 minute rate change
+    - 15 minute rate change
+    - Mean rate change
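To make the mean rate concrete: it is simply the tuple count divided by the elapsed time since the meter was created. A minimal stand-alone sketch (plain Java, not the Quarks `RateMeter` implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

public class RateMeterSketch {
    private final AtomicLong count = new AtomicLong();
    private final long startNanos = System.nanoTime();

    // Called once per tuple observed on the stream
    public void mark() {
        count.incrementAndGet();
    }

    public long getCount() {
        return count.get();
    }

    // Mean rate in tuples per second since the meter was created
    public double meanRate() {
        double elapsedSeconds = (System.nanoTime() - startNanos) / 1e9;
        return elapsedSeconds > 0 ? count.get() / elapsedSeconds : 0.0;
    }

    public static void main(String[] args) throws InterruptedException {
        RateMeterSketch meter = new RateMeterSketch();
        for (int i = 0; i < 10; i++) {
            meter.mark();
            Thread.sleep(10); // simulate tuples arriving over time
        }
        System.out.println("count=" + meter.getCount());
        System.out.printf("mean rate ~ %.1f tuples/sec%n", meter.meanRate());
    }
}
```

The 1, 5 and 15 minute rates are smoothed (exponentially weighted) variants of the same idea, so recent activity is weighted more heavily than the mean rate's since-start average.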
 
-Now change the Metrics select to the 'MeanRate'.  In our example these correspond to oplets OP_37 and OP_49:
+Now change the Metrics select to the 'MeanRate'. In our example these correspond to oplets OP_37 and OP_49:
 
 <img src="images/console_rate_metrics.jpg" width='100%'/>
 
-Hovering over the slightly larger bar, the one to the right, the name is OP_49.  Looking at the topology graph and changing the view to 'Static flow', follow the edges back from OP_49 until you can see an edge with a tag on it. You can see that OP_49's source is OP_51, whose source is OP_99.  The edge between OP_99 and it's source OP_48 has multiple tags.  Hovering over this stream, the tags are 'TEMP out of range' and 'well3'.
+Hovering over the slightly larger bar, the one to the right, shows that its name is OP_49. Looking at the topology graph and changing the view to 'Static flow', follow the edges back from OP_49 until you can see an edge with a tag on it. You can see that OP_49's source is OP_51, whose source is OP_99. The edge between OP_99 and its source OP_48 has multiple tags. Hovering over this stream, the tags are 'TEMP out of range' and 'well3'.
 
-If a single Rate Meter is placed on a stream, in addition to plotting a bar chart, a line chart over the last 20 measures can be viewed.  For example, if I comment out the addition of the rateMeter for well1 and then rerun the application, the Metrics section will look like the image below.  I selected the OneMinuteRate and 'Line chart' for Chart type:
+If a single rate meter is placed on a stream, in addition to plotting a bar chart, a line chart over the last 20 measures can be viewed. For example, if I comment out the addition of the rate meter for `well1` and then rerun the application, the Metrics section will look like the image below. I selected the 'OneMinuteRate' and 'Line chart' for Chart type:
 
 <img src="images/console_metrics_line_chart.jpg" width='100%'/>
 
-#Summary
+## Summary
 
 The intent of the information on this page is to help you understand the following:
 
@@ -476,6 +490,6 @@ The intent of the information on this page is to help you understand the followi
 * How to use the metrics section to understand tuple counters and rate meters
 * How to correlate values from the metrics section with the topology graph
 
-The Quarks console will continue to evolve and improve.  Please open an issue if you see a problem with the existing console, but more importantly add an issue if you have an idea of how to make the console better.  
+The Quarks console will continue to evolve and improve. Please open an issue if you see a problem with the existing console, but more importantly add an issue if you have an idea of how to make the console better.
 
-The more folks write Quarks applications and view them in the console, the more information we can gather from the community about what is needed in the console.  Please consider making a contribution if there is a feature in the console that would really help you and others!
+The more folks write Quarks applications and view them in the console, the more information we can gather from the community about what is needed in the console. Please consider making a contribution if there is a feature in the console that would really help you and others!

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/faq.md
----------------------------------------------------------------------
diff --git a/site/docs/faq.md b/site/docs/faq.md
index 5b8711e..566784f 100644
--- a/site/docs/faq.md
+++ b/site/docs/faq.md
@@ -1,6 +1,7 @@
 ---
-title: FAQ  
+title: FAQ
 ---
+
 ## What is Apache Quarks?
 
 Quarks provides APIs and a lightweight runtime to analyze streaming data at the edge.
@@ -11,23 +12,23 @@ The edge includes devices, gateways, equipment, vehicles, systems, appliances an
 
 ## How is Apache Quarks used?
 
-Quarks can be used at the edge of the Internet of Things, for example, to analyze data on devices, engines, connected cars, etc.  Quarks could be on the device itself, or a gateway device collecting data from local devices.  You can write an edge application on Quarks and connect it to a Cloud service, such as the IBM Watson IoT Platform. It can also be used for enterprise data collection and analysis; for example log collectors, application data, and data center analytics.
+Quarks can be used at the edge of the Internet of Things, for example, to analyze data on devices, engines, connected cars, etc. Quarks could be on the device itself, or a gateway device collecting data from local devices. You can write an edge application on Quarks and connect it to a Cloud service, such as the IBM Watson IoT Platform. It can also be used for enterprise data collection and analysis; for example log collectors, application data, and data center analytics.
 
 ## How are applications developed?
 
-Applications are developed using a functional flow API to define operations on data streams that are executed as a graph of "oplets" in a lightweight embeddable runtime.  The SDK provides capabilities like windowing, aggregation and connectors with an extensible model for the community to expand its capabilities.
+Applications are developed using a functional flow API to define operations on data streams that are executed as a graph of "oplets" in a lightweight embeddable runtime. The SDK provides capabilities like windowing, aggregation and connectors with an extensible model for the community to expand its capabilities.
 
 ## What APIs does Apache Quarks support?
 
-Currently, Quarks supports APIs for Java and Android. Support for additional languages, such as Python, is likely as more developers get involved.  Please consider joining the Quarks open source development community to accelerate the contributions of additional APIs.
+Currently, Quarks supports APIs for Java and Android. Support for additional languages, such as Python, is likely as more developers get involved. Please consider joining the Quarks open source development community to accelerate the contributions of additional APIs.
 
 ## What type of analytics can be done with Apache Quarks?
 
-Quarks provides windowing, aggregation and simple filtering. It uses Apache Common Math to provide simple analytics aimed at device sensors.  Quarks is also extensible, so you can call existing libraries from within your Quarks application.  In the future, Quarks will include more analytics, either exposing more functionality from Apache Common Math, other libraries or hand-coded analytics.
+Quarks provides windowing, aggregation and simple filtering. It uses Apache Commons Math to provide simple analytics aimed at device sensors. Quarks is also extensible, so you can call existing libraries from within your Quarks application. In the future, Quarks will include more analytics, either exposing more functionality from Apache Commons Math, other libraries or hand-coded analytics.
 
 ## What connectors does Apache Quarks support?
 
-Quarks supports connectors for MQTT, HTTP, JDBC, File, Apache Kafka and IBM Watson IoT Platform.  Quarks is extensible; you can add the connector of your choice.
+Quarks supports connectors for MQTT, HTTP, JDBC, File, Apache Kafka and IBM Watson IoT Platform. Quarks is extensible; you can add the connector of your choice.
 
 ## What centralized streaming analytic systems does Apache Quarks support?
 
@@ -35,7 +36,7 @@ Quarks supports open source technology (such as Apache Spark, Apache Storm, Flin
 
 ## Why do I need Apache Quarks on the edge, rather than my streaming analytic system?
 
-Quarks is designed for the edge, rather than a more centralized system.  It has a small footprint, suitable for running on devices.  Quarks provides simple analytics, allowing a device to analyze data locally and to only send to the centralized system if there is a need, reducing communication costs.
+Quarks is designed for the edge, rather than a more centralized system. It has a small footprint, suitable for running on devices. Quarks provides simple analytics, allowing a device to analyze data locally and to only send to the centralized system if there is a need, reducing communication costs.
 
 ## Why do I need Apache Quarks, rather than coding the complete application myself?
 
@@ -43,7 +44,7 @@ Quarks is a tool for edge analytics that allows you to be more productive. Quark
 
 ## Where can I download Apache Quarks to try it out?
 
-Quarks is migrating from github quarks-edge to Apache. You can download the source from Apache and build it yourself [here](https://github.com/apache/incubator-quarks).  You can also  find already built pre-Apache releases of Quarks for download [here](https://github.com/quarks-edge/quarks/releases/latest). These releases are not associated with Apache.
+Quarks is migrating from GitHub quarks-edge to Apache. You can download the source from Apache and build it yourself [here](https://github.com/apache/incubator-quarks). You can also find already-built pre-Apache releases of Quarks for download [here](https://github.com/quarks-edge/quarks/releases/latest). These releases are not associated with Apache.
 
 ## How do I get started?
 
@@ -51,11 +52,11 @@ Getting started is simple. Once you have downloaded Quarks, everything you need
 
 ## How can I get involved?
 
- We would love to have your help! Visit [Get Involved](community) to learn more about how to get involved.
+We would love to have your help! Visit [Get Involved](community) to learn more about how to get involved.
 
 ## How can I contribute code?
 
-Just submit a [pull request](https://github.com/apache/incubator-quarks) and wait for a committer to review.  For more information, visit our [committer page](committers) and read [DEVELOPMENT.md] (https://github.com/apache/incubator-quarks/blob/master/DEVELOPMENT.md) at the top of the code tree.
+Just submit a [pull request](https://github.com/apache/incubator-quarks) and wait for a committer to review. For more information, visit our [committer page](committers) and read [DEVELOPMENT.md](https://github.com/apache/incubator-quarks/blob/master/DEVELOPMENT.md) at the top of the code tree.
 
 ## Can I become a committer?
 
@@ -67,12 +68,11 @@ The source code is available [here](https://github.com/apache/incubator-quarks).
 
 ## Can I take a copy of the code and fork it for my own use?
 
-Yes. Quarks is available under the Apache 2.0 license which allows you to fork the code.  We hope you will contribute your changes back to the Quarks community.
+Yes. Quarks is available under the Apache 2.0 license which allows you to fork the code. We hope you will contribute your changes back to the Quarks community.
 
 ## How do I suggest new features?
 
-Click [Issues](https://issues.apache.org/jira/browse/QUARKS)
- to submit requests for new features. You may browse or query the Issues database to see what other members of the Quarks community have already requested.
+Click [Issues](https://issues.apache.org/jira/browse/QUARKS) to submit requests for new features. You may browse or query the Issues database to see what other members of the Quarks community have already requested.
 
 ## How do I submit bug reports?
 
@@ -84,4 +84,4 @@ Use [site.data.project.user_list](mailto:{{ site.data.project.user_list }}) to s
 
 ## Why is Apache Quarks open source?
 
-With the growth of the Internet of Things there is a need to execute analytics at the edge. Quarks was developed to address requirements for analytics at the edge for IoT use cases that were not addressed by central analytic solutions.  These capabilities will be useful to many organizations and that the diverse nature of edge devices and use cases is best addressed by an open community.  Our goal is to develop a vibrant community of developers and users to expand the capabilities and real-world use of Quarks by companies and individuals to enable edge analytics and further innovation for the IoT space.
+With the growth of the Internet of Things there is a need to execute analytics at the edge. Quarks was developed to address requirements for analytics at the edge for IoT use cases that were not addressed by central analytic solutions. These capabilities will be useful to many organizations, and the diverse nature of edge devices and use cases is best addressed by an open community. Our goal is to develop a vibrant community of developers and users to expand the capabilities and real-world use of Quarks by companies and individuals to enable edge analytics and further innovation for the IoT space.

http://git-wip-us.apache.org/repos/asf/incubator-quarks-website/blob/4a3afaef/site/docs/home.md
----------------------------------------------------------------------
diff --git a/site/docs/home.md b/site/docs/home.md
index d2e5d07..6a9c01b 100644
--- a/site/docs/home.md
+++ b/site/docs/home.md
@@ -7,6 +7,7 @@ homepage: true
 ---
 
 ## Apache Quarks overview
+
 Devices and sensors are everywhere, and more are coming online every day. You need a way to analyze all of the data coming from your devices, but it can be expensive to transmit all of the data from a sensor to your central analytics engine.
 
 Quarks is an open source programming model and runtime for edge devices that enables you to analyze data and events at the device. When you analyze on the edge, you can:
@@ -17,19 +18,20 @@ Quarks is an open source programming model and runtime for edge devices that ena
 
 A Quarks application uses analytics to determine when data needs to be sent to a back-end system for further analysis, action, or storage. For example, you can use Quarks to determine whether a system is running outside of normal parameters, such as an engine that is running too hot.
 
-If the system is running normally, you don\u2019t need to send this data to your back-end system; it\u2019s an added cost and an additional load on your system to process and store. However, if Quarks detects an issue, you can transmit that data to your back-end system to determine why the issue is occurring and how to resolve the issue.   
+If the system is running normally, you don’t need to send this data to your back-end system; it’s an added cost and an additional load on your system to process and store. However, if Quarks detects an issue, you can transmit that data to your back-end system to determine why the issue is occurring and how to resolve the issue.
 
 Quarks enables you to shift from sending a continuous flow of trivial data to the server to sending only essential and meaningful data as it occurs. This is especially important when the cost of communication is high, such as when using a cellular network to transmit data, or when bandwidth is limited.
 
 The following use cases describe the primary situations in which you would use Quarks:
 
-* *Internet of Things (IoT):* Analyze data on distributed edge devices and mobile devices to:
-  * Reduce the cost of transmitting data
-  * Provide local feedback at the devices
-* *Embedded in an application server instance:* Analyze application server error logs in real time without impacting network traffic
-* *Server rooms and machine rooms:* Analyze machine health in real time without impacting network traffic or when bandwidth is limited
+* **Internet of Things (IoT)**: Analyze data on distributed edge devices and mobile devices to:
+  - Reduce the cost of transmitting data
+  - Provide local feedback at the devices
+* **Embedded in an application server instance**: Analyze application server error logs in real time without impacting network traffic
+* **Server rooms and machine rooms**: Analyze machine health in real time without impacting network traffic or when bandwidth is limited
 
 ### Deployment environments
+
 The following environments have been tested for deployment on edge devices:
 
 * Java 8, including Raspberry Pi B and Pi2 B
@@ -37,16 +39,16 @@ The following environments have been tested for deployment on edge devices:
 * Android
 
 ### Edge devices and back-end systems
+
 You can send data from an Apache Quarks application to your back-end system when you need to perform analysis that cannot be performed on the edge device, such as:
 
-* Running a complex analytic algorithm that requires more resources, such as CPU or memory, than are available on the edge device.
-* Maintaining large amounts of state information about a device, such as several hours worth of state information for a patient\u2019s
-medical device.
+* Running a complex analytic algorithm that requires more resources, such as CPU or memory, than are available on the edge device
+* Maintaining large amounts of state information about a device, such as several hours’ worth of state information for a patient’s medical device
 * Correlating data from the device with data from other sources, such as:
-  * Weather data
-  * Social media data
-  * Data of record, such as a patient\u2019s medical history or trucking manifests
-  * Data from other devices
+  - Weather data
+  - Social media data
+  - Data of record, such as a patient’s medical history or trucking manifests
+  - Data from other devices
 
 Quarks communicates with your back-end systems through the following message hubs: