Posted to dev@edgent.apache.org by queeniema <gi...@git.apache.org> on 2016/04/29 22:10:16 UTC

[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

GitHub user queeniema opened a pull request:

    https://github.com/apache/incubator-quarks-website/pull/53

    [QUARKS-159] Update website to follow style guide

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/queeniema/incubator-quarks-website QUARKS-159

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-quarks-website/pull/53.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #53
    
----
commit 4a3afaefc23415144726e9383795f64aa49fe649
Author: Queenie Ma <qu...@gmail.com>
Date:   2016-04-29T20:01:37Z

    [QUARKS-159] Update website to follow style guide

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61791036
  
    --- Diff: site/docs/quarks-getting-started.md ---
    @@ -42,152 +44,151 @@ The Quarks Java 8 JAR files are located in the `quarks/java8/lib` directory.
     
         <img src="images/Build_Path_Jars.JPG" style="width:661px;height:444px;">
     
    -<br/>
     Your environment is set up! You can start writing your first Quarks application.
     
    -
     ## Creating a simple application
    +
     If you're new to Quarks or to writing streaming applications, the best way to get started is to write a simple program.
     
    -Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally---such as, in a car engine, on an Android phone, or Raspberry Pi---before you send data over a network.
    +Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally&mdash;such as, in a car engine, on an Android phone, or Raspberry Pi&mdash;before you send data over a network.
     
     For example, if your device takes temperature readings from a sensor 1,000 times per second, it is more efficient to process the data locally and send only interesting or unexpected results over the network. To simulate this, let's define a (simulated) TempSensor class:
     
    -
    -
     ```java
    -  	import java.util.Random;
    -
    -  	import quarks.function.Supplier;
    -
    -  	/**
    -     * Every time get() is called, TempSensor generates a temperature reading.
    -     */
    -    public class TempSensor implements Supplier<Double> {
    -  		double currentTemp = 65.0;
    -  		Random rand;
    -
    -  		TempSensor(){
    -  			rand = new Random();
    -  		}
    -
    -  		@Override
    -  		public Double get() {
    -  			// Change the current temperature some random amount
    -  			double newTemp = rand.nextGaussian() + currentTemp;
    -  			currentTemp = newTemp;
    -  			return currentTemp;
    -  		}
    -  	}
    +import java.util.Random;
    +
    +import quarks.function.Supplier;
    +
    +/**
    + * Every time get() is called, TempSensor generates a temperature reading.
    + */
    +public class TempSensor implements Supplier<Double> {
    +    double currentTemp = 65.0;
    +    Random rand;
    +
    +    TempSensor(){
    +        rand = new Random();
    +    }
    +
    +    @Override
    +    public Double get() {
    +        // Change the current temperature some random amount
    +        double newTemp = rand.nextGaussian() + currentTemp;
    +        currentTemp = newTemp;
    +        return currentTemp;
    +    }
    +}
     ```
     
    -
     Every time you call `TempSensor.get()`, it returns a new temperature reading. The continuous temperature readings are a stream of data that a Quarks application can process.
     
     Our sample Quarks application processes this stream by filtering the data and printing the results. Let's define a TempSensorApplication class for the application:
     
     ```java
    -	import java.util.concurrent.TimeUnit;
    -
    -	import quarks.providers.direct.DirectProvider;
    -	import quarks.topology.TStream;
    -	import quarks.topology.Topology;
    -
    -	public class TempSensorApplication {
    -		public static void main(String[] args) throws Exception {
    -		    TempSensor sensor = new TempSensor();
    -		    DirectProvider dp = new DirectProvider();      
    -		    Topology topology = dp.newTopology();
    -		    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    -		    TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    -
    -		    filteredReadings.print();
    -		    dp.submit(topology);
    -		  }
    -	}
    +import java.util.concurrent.TimeUnit;
    +
    +import quarks.providers.direct.DirectProvider;
    +import quarks.topology.TStream;
    +import quarks.topology.Topology;
    +
    +public class TempSensorApplication {
    +    public static void main(String[] args) throws Exception {
    +        TempSensor sensor = new TempSensor();
    +        DirectProvider dp = new DirectProvider();
    +        Topology topology = dp.newTopology();
    +        TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +        TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    +
    +        filteredReadings.print();
    +        dp.submit(topology);
    +    }
    +}
     ```
     
     To understand how the application processes the stream, let's review each line.
     
     ### Specifying a provider
    -Your first step when you write a Quarks application is to create a
    -[`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html) :
    +
    +Your first step when you write a Quarks application is to create a [`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html):
     
     ```java
    -    DirectProvider dp = new DirectProvider();
    +DirectProvider dp = new DirectProvider();
     ```
     
    -A **Provider** is an object that contains information on how and where your Quarks application will run. A **DirectProvider** is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
    +A `Provider` is an object that contains information on how and where your Quarks application will run. A `DirectProvider` is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
     
     ### Creating a topology
    -Additionally a Provider is used to create a
    -[`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance :
    +
    +Additionally a Provider is used to create a [`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance:
     
     ```java
    -    Topology topology = dp.newTopology();
    +Topology topology = dp.newTopology();
     ```
     
    -In Quarks, **Topology** is a container that describes the structure of your application:
    +In Quarks, `Topology` is a container that describes the structure of your application:
     
     * Where the streams in the application come from
    -
     * How the data in the stream is modified
     
    -In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a Supplier function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
    +In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a `Supplier` function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
     
     ```java
    -    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
     ```
     
    -### Defining the TStream object
    +### Defining the `TStream` object
    +
     Calling `topology.poll()` to define a source stream creates a `TStream<Double>` instance, which represents the series of readings taken from the temperature sensor.
     
    -A streaming application can run indefinitely, so the TStream might see an arbitrarily large number of readings pass through it. Because a TStream represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +A streaming application can run indefinitely, so the `TStream` might see an arbitrarily large number of readings pass through it. Because a `TStream` represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +
    +## Filtering a `TStream`
    --- End diff --
    
    I made the adjustment when merging



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61644260
  
    --- Diff: site/docs/quarks-getting-started.md ---
    @@ -42,152 +44,151 @@ The Quarks Java 8 JAR files are located in the `quarks/java8/lib` directory.
     
         <img src="images/Build_Path_Jars.JPG" style="width:661px;height:444px;">
     
    -<br/>
     Your environment is set up! You can start writing your first Quarks application.
     
    -
     ## Creating a simple application
    +
     If you're new to Quarks or to writing streaming applications, the best way to get started is to write a simple program.
     
    -Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally---such as, in a car engine, on an Android phone, or Raspberry Pi---before you send data over a network.
    +Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally&mdash;such as, in a car engine, on an Android phone, or Raspberry Pi&mdash;before you send data over a network.
     
     For example, if your device takes temperature readings from a sensor 1,000 times per second, it is more efficient to process the data locally and send only interesting or unexpected results over the network. To simulate this, let's define a (simulated) TempSensor class:
     
    -
    -
     ```java
    -  	import java.util.Random;
    -
    -  	import quarks.function.Supplier;
    -
    -  	/**
    -     * Every time get() is called, TempSensor generates a temperature reading.
    -     */
    -    public class TempSensor implements Supplier<Double> {
    -  		double currentTemp = 65.0;
    -  		Random rand;
    -
    -  		TempSensor(){
    -  			rand = new Random();
    -  		}
    -
    -  		@Override
    -  		public Double get() {
    -  			// Change the current temperature some random amount
    -  			double newTemp = rand.nextGaussian() + currentTemp;
    -  			currentTemp = newTemp;
    -  			return currentTemp;
    -  		}
    -  	}
    +import java.util.Random;
    +
    +import quarks.function.Supplier;
    +
    +/**
    + * Every time get() is called, TempSensor generates a temperature reading.
    + */
    +public class TempSensor implements Supplier<Double> {
    +    double currentTemp = 65.0;
    +    Random rand;
    +
    +    TempSensor(){
    +        rand = new Random();
    +    }
    +
    +    @Override
    +    public Double get() {
    +        // Change the current temperature some random amount
    +        double newTemp = rand.nextGaussian() + currentTemp;
    +        currentTemp = newTemp;
    +        return currentTemp;
    +    }
    +}
     ```
     
    -
     Every time you call `TempSensor.get()`, it returns a new temperature reading. The continuous temperature readings are a stream of data that a Quarks application can process.
     
     Our sample Quarks application processes this stream by filtering the data and printing the results. Let's define a TempSensorApplication class for the application:
     
     ```java
    -	import java.util.concurrent.TimeUnit;
    -
    -	import quarks.providers.direct.DirectProvider;
    -	import quarks.topology.TStream;
    -	import quarks.topology.Topology;
    -
    -	public class TempSensorApplication {
    -		public static void main(String[] args) throws Exception {
    -		    TempSensor sensor = new TempSensor();
    -		    DirectProvider dp = new DirectProvider();      
    -		    Topology topology = dp.newTopology();
    -		    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    -		    TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    -
    -		    filteredReadings.print();
    -		    dp.submit(topology);
    -		  }
    -	}
    +import java.util.concurrent.TimeUnit;
    +
    +import quarks.providers.direct.DirectProvider;
    +import quarks.topology.TStream;
    +import quarks.topology.Topology;
    +
    +public class TempSensorApplication {
    +    public static void main(String[] args) throws Exception {
    +        TempSensor sensor = new TempSensor();
    +        DirectProvider dp = new DirectProvider();
    +        Topology topology = dp.newTopology();
    +        TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +        TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    +
    +        filteredReadings.print();
    +        dp.submit(topology);
    +    }
    +}
     ```
     
     To understand how the application processes the stream, let's review each line.
     
     ### Specifying a provider
    -Your first step when you write a Quarks application is to create a
    -[`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html) :
    +
    +Your first step when you write a Quarks application is to create a [`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html):
     
     ```java
    -    DirectProvider dp = new DirectProvider();
    +DirectProvider dp = new DirectProvider();
     ```
     
    -A **Provider** is an object that contains information on how and where your Quarks application will run. A **DirectProvider** is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
    +A `Provider` is an object that contains information on how and where your Quarks application will run. A `DirectProvider` is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
     
     ### Creating a topology
    -Additionally a Provider is used to create a
    -[`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance :
    +
    +Additionally a Provider is used to create a [`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance:
     
     ```java
    -    Topology topology = dp.newTopology();
    +Topology topology = dp.newTopology();
     ```
     
    -In Quarks, **Topology** is a container that describes the structure of your application:
    +In Quarks, `Topology` is a container that describes the structure of your application:
     
     * Where the streams in the application come from
    -
     * How the data in the stream is modified
     
    -In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a Supplier function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
    +In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a `Supplier` function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
     
     ```java
    -    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
     ```
     
    -### Defining the TStream object
    +### Defining the `TStream` object
    +
     Calling `topology.poll()` to define a source stream creates a `TStream<Double>` instance, which represents the series of readings taken from the temperature sensor.
     
    -A streaming application can run indefinitely, so the TStream might see an arbitrarily large number of readings pass through it. Because a TStream represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +A streaming application can run indefinitely, so the `TStream` might see an arbitrarily large number of readings pass through it. Because a `TStream` represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +
    +## Filtering a `TStream`
    --- End diff --
    
    An impressive set of all-encompassing changes!  I just got lucky noticing this :-)



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61643044
  
    --- Diff: site/recipes/recipe_adaptable_deadtime_filter.md ---
    @@ -4,63 +4,63 @@ title: Using an adaptable deadtime filter
     
     Oftentimes, an application wants to control the frequency that continuously generated analytic results are made available to other parts of the application or published to other applications or an event hub.
     
    -For example, an application polls an engine temperature sensor every second and performs various analytics on each reading - an analytic result is generated every second.  By default, the application only wants to publish a (healthy) analytic result every 30 minutes.  However, under certain conditions, the desire is to publish every per-second analytic result.
    +For example, an application polls an engine temperature sensor every second and performs various analytics on each reading &mdash; an analytic result is generated every second. By default, the application only wants to publish a (healthy) analytic result every 30 minutes. However, under certain conditions, the desire is to publish every per-second analytic result.
     
     Such a condition may be locally detected, such as detecting a sudden rise in the engine temperature or it may be as a result of receiving some external command to change the publishing frequency.
     
     Note this is a different case than simply changing the polling frequency for the sensor as doing that would disable local continuous monitoring and analysis of the engine temperature.
     
    -This case needs a *deadtime filter* and Quarks provides one for your use!  In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through.  E.g., if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes.  The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
    +This case needs a *deadtime filter* and Quarks provides one for your use! In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through. For example, if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes. The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
     
    -See ``quarks.analytics.sensors.Filters.deadtime()`` and ``quarks.analytics.sensors.Deadtime``.
    +See `quarks.analytics.sensors.Filters.deadtime()` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Filters.java)) and `quarks.analytics.sensors.Deadtime` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Deadtime.java)).
    --- End diff --
    
    I appreciate the effort to supply helpful links! :-) But users really don't want a link to the source code.
    Is there an intent (JIRA?) to go fix these sorts of things to link to the Javadoc once that's available?
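    
    For reference, here is roughly how the `Deadtime` filter gets used, loosely assembled from the snippets later in this recipe, with the getting-started guide's `TempSensor` standing in for the recipe's `SimulatedTemperatureSensor`. Treat it as a sketch (class name included) rather than the recipe's exact code:
    
    ```java
    import java.util.concurrent.TimeUnit;
    
    import quarks.analytics.sensors.Deadtime;
    import quarks.providers.direct.DirectProvider;
    import quarks.topology.TStream;
    import quarks.topology.Topology;
    
    public class DeadtimeFilterSketch {
        public static void main(String[] args) throws Exception {
            DirectProvider dp = new DirectProvider();
            Topology topology = dp.newTopology();
    
            // Poll the (simulated) temperature sensor once per second.
            TempSensor sensor = new TempSensor();
            TStream<Double> engineTemp = topology.poll(sensor, 1, TimeUnit.SECONDS)
                                                 .tag("engineTemp");
    
            // A Deadtime created with no arguments has no deadtime period,
            // so initially every reading passes through the filter.
            Deadtime<Double> deadtime = new Deadtime<>();
            TStream<Double> filtered = engineTemp.filter(deadtime)
                                                 .tag("deadtimeFilteredEngineTemp");
    
            // Something like a command stream could later call
            // deadtime.setPeriod(30, TimeUnit.MINUTES) to throttle the output.
            filtered.print();
            dp.submit(topology);
        }
    }
    ```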



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61641412
  
    --- Diff: site/docs/quarks-getting-started.md ---
    @@ -42,152 +44,151 @@ The Quarks Java 8 JAR files are located in the `quarks/java8/lib` directory.
     
         <img src="images/Build_Path_Jars.JPG" style="width:661px;height:444px;">
     
    -<br/>
     Your environment is set up! You can start writing your first Quarks application.
     
    -
     ## Creating a simple application
    +
     If you're new to Quarks or to writing streaming applications, the best way to get started is to write a simple program.
     
    -Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally---such as, in a car engine, on an Android phone, or Raspberry Pi---before you send data over a network.
    +Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally&mdash;such as, in a car engine, on an Android phone, or Raspberry Pi&mdash;before you send data over a network.
     
     For example, if your device takes temperature readings from a sensor 1,000 times per second, it is more efficient to process the data locally and send only interesting or unexpected results over the network. To simulate this, let's define a (simulated) TempSensor class:
     
    -
    -
     ```java
    -  	import java.util.Random;
    -
    -  	import quarks.function.Supplier;
    -
    -  	/**
    -     * Every time get() is called, TempSensor generates a temperature reading.
    -     */
    -    public class TempSensor implements Supplier<Double> {
    -  		double currentTemp = 65.0;
    -  		Random rand;
    -
    -  		TempSensor(){
    -  			rand = new Random();
    -  		}
    -
    -  		@Override
    -  		public Double get() {
    -  			// Change the current temperature some random amount
    -  			double newTemp = rand.nextGaussian() + currentTemp;
    -  			currentTemp = newTemp;
    -  			return currentTemp;
    -  		}
    -  	}
    +import java.util.Random;
    +
    +import quarks.function.Supplier;
    +
    +/**
    + * Every time get() is called, TempSensor generates a temperature reading.
    + */
    +public class TempSensor implements Supplier<Double> {
    +    double currentTemp = 65.0;
    +    Random rand;
    +
    +    TempSensor(){
    +        rand = new Random();
    +    }
    +
    +    @Override
    +    public Double get() {
    +        // Change the current temperature some random amount
    +        double newTemp = rand.nextGaussian() + currentTemp;
    +        currentTemp = newTemp;
    +        return currentTemp;
    +    }
    +}
     ```
     
    -
     Every time you call `TempSensor.get()`, it returns a new temperature reading. The continuous temperature readings are a stream of data that a Quarks application can process.
     
     Our sample Quarks application processes this stream by filtering the data and printing the results. Let's define a TempSensorApplication class for the application:
     
     ```java
    -	import java.util.concurrent.TimeUnit;
    -
    -	import quarks.providers.direct.DirectProvider;
    -	import quarks.topology.TStream;
    -	import quarks.topology.Topology;
    -
    -	public class TempSensorApplication {
    -		public static void main(String[] args) throws Exception {
    -		    TempSensor sensor = new TempSensor();
    -		    DirectProvider dp = new DirectProvider();      
    -		    Topology topology = dp.newTopology();
    -		    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    -		    TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    -
    -		    filteredReadings.print();
    -		    dp.submit(topology);
    -		  }
    -	}
    +import java.util.concurrent.TimeUnit;
    +
    +import quarks.providers.direct.DirectProvider;
    +import quarks.topology.TStream;
    +import quarks.topology.Topology;
    +
    +public class TempSensorApplication {
    +    public static void main(String[] args) throws Exception {
    +        TempSensor sensor = new TempSensor();
    +        DirectProvider dp = new DirectProvider();
    +        Topology topology = dp.newTopology();
    +        TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +        TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    +
    +        filteredReadings.print();
    +        dp.submit(topology);
    +    }
    +}
     ```
     
     To understand how the application processes the stream, let's review each line.
     
     ### Specifying a provider
    -Your first step when you write a Quarks application is to create a
    -[`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html) :
    +
    +Your first step when you write a Quarks application is to create a [`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html):
     
     ```java
    -    DirectProvider dp = new DirectProvider();
    +DirectProvider dp = new DirectProvider();
     ```
     
    -A **Provider** is an object that contains information on how and where your Quarks application will run. A **DirectProvider** is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
    +A `Provider` is an object that contains information on how and where your Quarks application will run. A `DirectProvider` is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
     
     ### Creating a topology
    -Additionally a Provider is used to create a
    -[`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance :
    +
    +Additionally a Provider is used to create a [`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance:
     
     ```java
    -    Topology topology = dp.newTopology();
    +Topology topology = dp.newTopology();
     ```
     
    -In Quarks, **Topology** is a container that describes the structure of your application:
    +In Quarks, `Topology` is a container that describes the structure of your application:
     
     * Where the streams in the application come from
    -
     * How the data in the stream is modified
     
    -In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a Supplier function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
    +In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a `Supplier` function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
     
     ```java
    -    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
     ```
     
    -### Defining the TStream object
    +### Defining the `TStream` object
    +
     Calling `topology.poll()` to define a source stream creates a `TStream<Double>` instance, which represents the series of readings taken from the temperature sensor.
     
    -A streaming application can run indefinitely, so the TStream might see an arbitrarily large number of readings pass through it. Because a TStream represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +A streaming application can run indefinitely, so the `TStream` might see an arbitrarily large number of readings pass through it. Because a `TStream` represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +
    +## Filtering a `TStream`
    --- End diff --
    
    Looks like "Filtering a TStream" this is supposed @ level3 (###) based on looking at the new webpage.  Ah see this isn't related to your changes.  But it seems a shame for the page to not be pristine after all the work you've done to clean things up :-) 



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61643402
  
    --- Diff: site/recipes/recipe_adaptable_deadtime_filter.md ---
    @@ -4,63 +4,63 @@ title: Using an adaptable deadtime filter
     
     Oftentimes, an application wants to control the frequency that continuously generated analytic results are made available to other parts of the application or published to other applications or an event hub.
     
    -For example, an application polls an engine temperature sensor every second and performs various analytics on each reading - an analytic result is generated every second.  By default, the application only wants to publish a (healthy) analytic result every 30 minutes.  However, under certain conditions, the desire is to publish every per-second analytic result.
    +For example, an application polls an engine temperature sensor every second and performs various analytics on each reading &mdash; an analytic result is generated every second. By default, the application only wants to publish a (healthy) analytic result every 30 minutes. However, under certain conditions, the desire is to publish every per-second analytic result.
     
     Such a condition may be locally detected, such as detecting a sudden rise in the engine temperature or it may be as a result of receiving some external command to change the publishing frequency.
     
     Note this is a different case than simply changing the polling frequency for the sensor as doing that would disable local continuous monitoring and analysis of the engine temperature.
     
    -This case needs a *deadtime filter* and Quarks provides one for your use!  In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through.  E.g., if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes.  The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
    +This case needs a *deadtime filter* and Quarks provides one for your use! In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through. For example, if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes. The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
     
    -See ``quarks.analytics.sensors.Filters.deadtime()`` and ``quarks.analytics.sensors.Deadtime``.
    +See `quarks.analytics.sensors.Filters.deadtime()` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Filters.java)) and `quarks.analytics.sensors.Deadtime` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Deadtime.java)).
     
     This recipe demonstrates how to use an adaptable deadtime filter.
     
    -A Quarks ``IotProvider`` and ``IoTDevice`` with its command streams would be a natural way to control the application.  In this recipe we will just simulate a "set deadtime period" command stream.
    +A Quarks `IotProvider` ad `IoTDevice` with its command streams would be a natural way to control the application. In this recipe we will just simulate a "set deadtime period" command stream.
     
     ## Create a polled sensor readings stream
     
     ```java
    -        Topology top = ...;
    -        SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    -        TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    -                                      .tag("engineTemp");
    +Topology top = ...;
    +SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    +TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    +                              .tag("engineTemp");
     ```
     
     It's also a good practice to add tags to streams to improve the usability of the development mode Quarks console.
     
    -## Create a deadtime filtered stream - initially no deadtime
    +## Create a deadtime filtered stream&mdash;initially no deadtime
     
    -In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream.  In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
    +In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream. In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
     
     ```java
    -        Deadtime<Double> deadtime = new Deadtime<>();
    -        TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    -                                      .tag("deadtimeFilteredEngineTemp");
    +Deadtime<Double> deadtime = new Deadtime<>();
    +TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    +                              .tag("deadtimeFilteredEngineTemp");
     ```
     
     ## Define a "set deadtime period" method
     
     ```java
    -    static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    -        System.out.println("Setting deadtime period="+period+" "+unit);
    -        deadtime.setPeriod(period, unit);
    -    }
    +static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    +    System.out.println("Setting deadtime period="+period+" "+unit);
    +    deadtime.setPeriod(period, unit);
    +}
     ```
     
     ## Process the "set deadtime period" command stream
     
    -Our commands are on the ``TStream<JsonObject> cmds`` stream.  Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
    +Our commands are on the ``TStream<JsonObject> cmds`` stream. Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
     
     ```java
    -        cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    -            json.getAsJsonPrimitive("period").getAsLong(),
    -            TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
    +cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    +    json.getAsJsonPrimitive("period").getAsLong(),
    +    TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
     ```
     
     ## The final application
     
    -When the application is run it will initially print out temperature sensor readings every second for 15 seconds - the deadtime period is 0.  Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    +When the application is run it will initially print out temperature sensor readings every second for 15 seconds&mdash;the deadtime period is 0. `Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    --- End diff --
    
    A stray backtick was added: `Then every 15 ...



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-quarks-website/pull/53



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by queeniema <gi...@git.apache.org>.
Github user queeniema commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61641834
  
    --- Diff: site/docs/quarks-getting-started.md ---
    @@ -42,152 +44,151 @@ The Quarks Java 8 JAR files are located in the `quarks/java8/lib` directory.
     
         <img src="images/Build_Path_Jars.JPG" style="width:661px;height:444px;">
     
    -<br/>
     Your environment is set up! You can start writing your first Quarks application.
     
    -
     ## Creating a simple application
    +
     If you're new to Quarks or to writing streaming applications, the best way to get started is to write a simple program.
     
    -Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally---such as, in a car engine, on an Android phone, or Raspberry Pi---before you send data over a network.
    +Quarks is a framework that pushes data analytics and machine learning to *edge devices*. (Edge devices include things like routers, gateways, machines, equipment, sensors, appliances, or vehicles that are connected to a network.) Quarks enables you to process data locally&mdash;such as, in a car engine, on an Android phone, or Raspberry Pi&mdash;before you send data over a network.
     
     For example, if your device takes temperature readings from a sensor 1,000 times per second, it is more efficient to process the data locally and send only interesting or unexpected results over the network. To simulate this, let's define a (simulated) TempSensor class:
     
    -
    -
     ```java
    -  	import java.util.Random;
    -
    -  	import quarks.function.Supplier;
    -
    -  	/**
    -     * Every time get() is called, TempSensor generates a temperature reading.
    -     */
    -    public class TempSensor implements Supplier<Double> {
    -  		double currentTemp = 65.0;
    -  		Random rand;
    -
    -  		TempSensor(){
    -  			rand = new Random();
    -  		}
    -
    -  		@Override
    -  		public Double get() {
    -  			// Change the current temperature some random amount
    -  			double newTemp = rand.nextGaussian() + currentTemp;
    -  			currentTemp = newTemp;
    -  			return currentTemp;
    -  		}
    -  	}
    +import java.util.Random;
    +
    +import quarks.function.Supplier;
    +
    +/**
    + * Every time get() is called, TempSensor generates a temperature reading.
    + */
    +public class TempSensor implements Supplier<Double> {
    +    double currentTemp = 65.0;
    +    Random rand;
    +
    +    TempSensor(){
    +        rand = new Random();
    +    }
    +
    +    @Override
    +    public Double get() {
    +        // Change the current temperature some random amount
    +        double newTemp = rand.nextGaussian() + currentTemp;
    +        currentTemp = newTemp;
    +        return currentTemp;
    +    }
    +}
     ```
     
    -
     Every time you call `TempSensor.get()`, it returns a new temperature reading. The continuous temperature readings are a stream of data that a Quarks application can process.
     
     Our sample Quarks application processes this stream by filtering the data and printing the results. Let's define a TempSensorApplication class for the application:
     
     ```java
    -	import java.util.concurrent.TimeUnit;
    -
    -	import quarks.providers.direct.DirectProvider;
    -	import quarks.topology.TStream;
    -	import quarks.topology.Topology;
    -
    -	public class TempSensorApplication {
    -		public static void main(String[] args) throws Exception {
    -		    TempSensor sensor = new TempSensor();
    -		    DirectProvider dp = new DirectProvider();      
    -		    Topology topology = dp.newTopology();
    -		    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    -		    TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    -
    -		    filteredReadings.print();
    -		    dp.submit(topology);
    -		  }
    -	}
    +import java.util.concurrent.TimeUnit;
    +
    +import quarks.providers.direct.DirectProvider;
    +import quarks.topology.TStream;
    +import quarks.topology.Topology;
    +
    +public class TempSensorApplication {
    +    public static void main(String[] args) throws Exception {
    +        TempSensor sensor = new TempSensor();
    +        DirectProvider dp = new DirectProvider();
    +        Topology topology = dp.newTopology();
    +        TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +        TStream<Double> filteredReadings = tempReadings.filter(reading -> reading < 50 || reading > 80);
    +
    +        filteredReadings.print();
    +        dp.submit(topology);
    +    }
    +}
     ```
     
     To understand how the application processes the stream, let's review each line.
     
     ### Specifying a provider
    -Your first step when you write a Quarks application is to create a
    -[`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html) :
    +
    +Your first step when you write a Quarks application is to create a [`DirectProvider`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/providers/direct/DirectProvider.html):
     
     ```java
    -    DirectProvider dp = new DirectProvider();
    +DirectProvider dp = new DirectProvider();
     ```
     
    -A **Provider** is an object that contains information on how and where your Quarks application will run. A **DirectProvider** is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
    +A `Provider` is an object that contains information on how and where your Quarks application will run. A `DirectProvider` is a type of Provider that runs your application directly within the current virtual machine when its `submit()` method is called.
     
     ### Creating a topology
    -Additionally a Provider is used to create a
    -[`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance :
    +
    +Additionally a Provider is used to create a [`Topology`](http://quarks-edge.github.io/quarks/docs/javadoc/index.html?quarks/topology/Topology.html) instance:
     
     ```java
    -    Topology topology = dp.newTopology();
    +Topology topology = dp.newTopology();
     ```
     
    -In Quarks, **Topology** is a container that describes the structure of your application:
    +In Quarks, `Topology` is a container that describes the structure of your application:
     
     * Where the streams in the application come from
    -
     * How the data in the stream is modified
     
    -In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a Supplier function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
    +In the TempSensor application above, we have exactly one data source: the `TempSensor` object. We define the source stream by calling `topology.poll()`, which takes both a `Supplier` function and a time parameter to indicate how frequently readings should be taken. In our case, we read from the sensor every millisecond:
     
     ```java
    -    TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
    +TStream<Double> tempReadings = topology.poll(sensor, 1, TimeUnit.MILLISECONDS);
     ```
     
    -### Defining the TStream object
    +### Defining the `TStream` object
    +
     Calling `topology.poll()` to define a source stream creates a `TStream<Double>` instance, which represents the series of readings taken from the temperature sensor.
     
    -A streaming application can run indefinitely, so the TStream might see an arbitrarily large number of readings pass through it. Because a TStream represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +A streaming application can run indefinitely, so the `TStream` might see an arbitrarily large number of readings pass through it. Because a `TStream` represents the flow of your data, it supports a number of operations which allow you to modify your data.
    +
    +## Filtering a `TStream`
    --- End diff --
    
    You're right, it should be a Level 3 header! Lots of changes, I was bound to make a mistake :)



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by dlaboss <gi...@git.apache.org>.
Github user dlaboss commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61791053
  
    --- Diff: site/recipes/recipe_adaptable_deadtime_filter.md ---
    @@ -4,63 +4,63 @@ title: Using an adaptable deadtime filter
     
     Oftentimes, an application wants to control the frequency that continuously generated analytic results are made available to other parts of the application or published to other applications or an event hub.
     
    -For example, an application polls an engine temperature sensor every second and performs various analytics on each reading - an analytic result is generated every second.  By default, the application only wants to publish a (healthy) analytic result every 30 minutes.  However, under certain conditions, the desire is to publish every per-second analytic result.
    +For example, an application polls an engine temperature sensor every second and performs various analytics on each reading &mdash; an analytic result is generated every second. By default, the application only wants to publish a (healthy) analytic result every 30 minutes. However, under certain conditions, the desire is to publish every per-second analytic result.
     
     Such a condition may be locally detected, such as detecting a sudden rise in the engine temperature or it may be as a result of receiving some external command to change the publishing frequency.
     
     Note this is a different case than simply changing the polling frequency for the sensor as doing that would disable local continuous monitoring and analysis of the engine temperature.
     
    -This case needs a *deadtime filter* and Quarks provides one for your use!  In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through.  E.g., if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes.  The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
    +This case needs a *deadtime filter* and Quarks provides one for your use! In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through. For example, if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes. The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
     
    -See ``quarks.analytics.sensors.Filters.deadtime()`` and ``quarks.analytics.sensors.Deadtime``.
    +See `quarks.analytics.sensors.Filters.deadtime()` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Filters.java)) and `quarks.analytics.sensors.Deadtime` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Deadtime.java)).
     
     This recipe demonstrates how to use an adaptable deadtime filter.
     
    -A Quarks ``IotProvider`` and ``IoTDevice`` with its command streams would be a natural way to control the application.  In this recipe we will just simulate a "set deadtime period" command stream.
    +A Quarks `IotProvider` ad `IoTDevice` with its command streams would be a natural way to control the application. In this recipe we will just simulate a "set deadtime period" command stream.
     
     ## Create a polled sensor readings stream
     
     ```java
    -        Topology top = ...;
    -        SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    -        TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    -                                      .tag("engineTemp");
    +Topology top = ...;
    +SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    +TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    +                              .tag("engineTemp");
     ```
     
     It's also a good practice to add tags to streams to improve the usability of the development mode Quarks console.
     
    -## Create a deadtime filtered stream - initially no deadtime
    +## Create a deadtime filtered stream&mdash;initially no deadtime
     
    -In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream.  In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
    +In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream. In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
     
     ```java
    -        Deadtime<Double> deadtime = new Deadtime<>();
    -        TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    -                                      .tag("deadtimeFilteredEngineTemp");
    +Deadtime<Double> deadtime = new Deadtime<>();
    +TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    +                              .tag("deadtimeFilteredEngineTemp");
     ```
     
     ## Define a "set deadtime period" method
     
     ```java
    -    static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    -        System.out.println("Setting deadtime period="+period+" "+unit);
    -        deadtime.setPeriod(period, unit);
    -    }
    +static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    +    System.out.println("Setting deadtime period="+period+" "+unit);
    +    deadtime.setPeriod(period, unit);
    +}
     ```
     
     ## Process the "set deadtime period" command stream
     
    -Our commands are on the ``TStream<JsonObject> cmds`` stream.  Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
    +Our commands are on the ``TStream<JsonObject> cmds`` stream. Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
     
     ```java
    -        cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    -            json.getAsJsonPrimitive("period").getAsLong(),
    -            TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
    +cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    +    json.getAsJsonPrimitive("period").getAsLong(),
    +    TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
     ```
     
     ## The final application
     
    -When the application is run it will initially print out temperature sensor readings every second for 15 seconds - the deadtime period is 0.  Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    +When the application is run it will initially print out temperature sensor readings every second for 15 seconds&mdash;the deadtime period is 0. `Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    --- End diff --
    
    I made the adjustment when merging
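
The recipe quoted above says it only simulates the "set deadtime period" command stream, but the hunk does not show how such a stream might be built. A minimal sketch of one possibility follows; the helper name `simulatedDeadtimeCmds` and the 15-second toggle between 5 seconds and 0 seconds are illustrative assumptions, not part of the recipe. It relies only on `Topology.poll()` and the Gson `JsonObject` properties ("period" and "unit") that the recipe's sink expects.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import com.google.gson.JsonObject;

import quarks.topology.TStream;
import quarks.topology.Topology;

public class SimulatedCommands {

    /**
     * Hypothetical helper: every 15 seconds emit a "set deadtime period" command,
     * alternating between a 5 second and a 0 second period, mimicking the toggling
     * behavior described at the end of the recipe.
     */
    public static TStream<JsonObject> simulatedDeadtimeCmds(Topology top) {
        AtomicLong count = new AtomicLong();
        return top.poll(() -> {
            JsonObject cmd = new JsonObject();
            // Even polls request a 5 second deadtime, odd polls turn it back off
            cmd.addProperty("period", count.getAndIncrement() % 2 == 0 ? 5L : 0L);
            cmd.addProperty("unit", TimeUnit.SECONDS.name());
            return cmd;
        }, 15, TimeUnit.SECONDS).tag("simulatedSetDeadtimeCmds");
    }
}
```

The resulting `TStream<JsonObject>` could then be fed to the `cmds.sink(...)` call shown in the diff.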



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by queeniema <gi...@git.apache.org>.
Github user queeniema commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61643306
  
    --- Diff: site/recipes/recipe_adaptable_deadtime_filter.md ---
    @@ -4,63 +4,63 @@ title: Using an adaptable deadtime filter
     
     Oftentimes, an application wants to control the frequency that continuously generated analytic results are made available to other parts of the application or published to other applications or an event hub.
     
    -For example, an application polls an engine temperature sensor every second and performs various analytics on each reading - an analytic result is generated every second.  By default, the application only wants to publish a (healthy) analytic result every 30 minutes.  However, under certain conditions, the desire is to publish every per-second analytic result.
    +For example, an application polls an engine temperature sensor every second and performs various analytics on each reading &mdash; an analytic result is generated every second. By default, the application only wants to publish a (healthy) analytic result every 30 minutes. However, under certain conditions, the desire is to publish every per-second analytic result.
     
     Such a condition may be locally detected, such as detecting a sudden rise in the engine temperature or it may be as a result of receiving some external command to change the publishing frequency.
     
     Note this is a different case than simply changing the polling frequency for the sensor as doing that would disable local continuous monitoring and analysis of the engine temperature.
     
    -This case needs a *deadtime filter* and Quarks provides one for your use!  In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through.  E.g., if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes.  The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
    +This case needs a *deadtime filter* and Quarks provides one for your use! In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through. For example, if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes. The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
     
    -See ``quarks.analytics.sensors.Filters.deadtime()`` and ``quarks.analytics.sensors.Deadtime``.
    +See `quarks.analytics.sensors.Filters.deadtime()` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Filters.java)) and `quarks.analytics.sensors.Deadtime` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Deadtime.java)).
    --- End diff --
    
    Yes, once the Javadoc is updated, then we should update these types of links to point to there. In the meantime, I figured that pointing to the source code might be helpful.
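
For readers comparing the two classes linked above: `Filters.deadtime()` fixes the deadtime period when the topology is declared, while holding a `Deadtime` instance, as this recipe does, keeps the period adjustable at runtime. The sketch below contrasts the two approaches; the `Filters.deadtime(stream, period, unit)` signature is taken from the linked source and should be confirmed against the published Javadoc once it is available.

```java
import java.util.concurrent.TimeUnit;

import quarks.analytics.sensors.Deadtime;
import quarks.analytics.sensors.Filters;
import quarks.topology.TStream;

public class DeadtimeStyles {

    // Fixed period: declared once, cannot be changed while the topology runs.
    static TStream<Double> fixedDeadtime(TStream<Double> engineTemp) {
        return Filters.deadtime(engineTemp, 30, TimeUnit.MINUTES)
                      .tag("fixedDeadtimeEngineTemp");
    }

    // Adaptable period: the caller keeps the Deadtime reference and may call
    // deadtime.setPeriod(...) later, for example from a command stream sink.
    static TStream<Double> adaptableDeadtime(TStream<Double> engineTemp,
                                             Deadtime<Double> deadtime) {
        return engineTemp.filter(deadtime)
                         .tag("adaptableDeadtimeEngineTemp");
    }
}
```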



[GitHub] incubator-quarks-website pull request: [QUARKS-159] Update website...

Posted by queeniema <gi...@git.apache.org>.
Github user queeniema commented on a diff in the pull request:

    https://github.com/apache/incubator-quarks-website/pull/53#discussion_r61643486
  
    --- Diff: site/recipes/recipe_adaptable_deadtime_filter.md ---
    @@ -4,63 +4,63 @@ title: Using an adaptable deadtime filter
     
     Oftentimes, an application wants to control the frequency that continuously generated analytic results are made available to other parts of the application or published to other applications or an event hub.
     
    -For example, an application polls an engine temperature sensor every second and performs various analytics on each reading - an analytic result is generated every second.  By default, the application only wants to publish a (healthy) analytic result every 30 minutes.  However, under certain conditions, the desire is to publish every per-second analytic result.
    +For example, an application polls an engine temperature sensor every second and performs various analytics on each reading &mdash; an analytic result is generated every second. By default, the application only wants to publish a (healthy) analytic result every 30 minutes. However, under certain conditions, the desire is to publish every per-second analytic result.
     
     Such a condition may be locally detected, such as detecting a sudden rise in the engine temperature or it may be as a result of receiving some external command to change the publishing frequency.
     
     Note this is a different case than simply changing the polling frequency for the sensor as doing that would disable local continuous monitoring and analysis of the engine temperature.
     
    -This case needs a *deadtime filter* and Quarks provides one for your use!  In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through.  E.g., if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes.  The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
    +This case needs a *deadtime filter* and Quarks provides one for your use! In contrast to a *deadband filter*, which skips tuples based on a deadband value range, a deadtime filter skips tuples based on a *deadtime period* following a tuple that is allowed to pass through. For example, if the deadtime period is 30 minutes, after allowing a tuple to pass, the filter skips any tuples received for the next 30 minutes. The next tuple received after that is allowed to pass through, and a new deadtime period is begun.
     
    -See ``quarks.analytics.sensors.Filters.deadtime()`` and ``quarks.analytics.sensors.Deadtime``.
    +See `quarks.analytics.sensors.Filters.deadtime()` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Filters.java)) and `quarks.analytics.sensors.Deadtime` (on [GitHub](https://github.com/apache/incubator-quarks/blob/master/analytics/sensors/src/main/java/quarks/analytics/sensors/Deadtime.java)).
     
     This recipe demonstrates how to use an adaptable deadtime filter.
     
    -A Quarks ``IotProvider`` and ``IoTDevice`` with its command streams would be a natural way to control the application.  In this recipe we will just simulate a "set deadtime period" command stream.
    +A Quarks `IotProvider` and `IoTDevice` with its command streams would be a natural way to control the application. In this recipe we will just simulate a "set deadtime period" command stream.
     
     ## Create a polled sensor readings stream
     
     ```java
    -        Topology top = ...;
    -        SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    -        TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    -                                      .tag("engineTemp");
    +Topology top = ...;
    +SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
    +TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
    +                              .tag("engineTemp");
     ```
     
     It's also a good practice to add tags to streams to improve the usability of the development mode Quarks console.
     
    -## Create a deadtime filtered stream - initially no deadtime
    +## Create a deadtime filtered stream&mdash;initially no deadtime
     
    -In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream.  In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
    +In this recipe we'll just filter the direct ``engineTemp`` sensor reading stream. In practice this filtering would be performed after some analytics stages and used as the input to ``IotDevice.event()`` or some other connector publish operation.
     
     ```java
    -        Deadtime<Double> deadtime = new Deadtime<>();
    -        TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    -                                      .tag("deadtimeFilteredEngineTemp");
    +Deadtime<Double> deadtime = new Deadtime<>();
    +TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
    +                              .tag("deadtimeFilteredEngineTemp");
     ```
     
     ## Define a "set deadtime period" method
     
     ```java
    -    static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    -        System.out.println("Setting deadtime period="+period+" "+unit);
    -        deadtime.setPeriod(period, unit);
    -    }
    +static <T> void setDeadtimePeriod(Deadtime<T> deadtime, long period, TimeUnit unit) {
    +    System.out.println("Setting deadtime period="+period+" "+unit);
    +    deadtime.setPeriod(period, unit);
    +}
     ```
     
     ## Process the "set deadtime period" command stream
     
    -Our commands are on the ``TStream<JsonObject> cmds`` stream.  Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
    +Our commands are on the ``TStream<JsonObject> cmds`` stream. Each ``JsonObject`` tuple is a command with the properties "period" and "unit".
     
     ```java
    -        cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    -            json.getAsJsonPrimitive("period").getAsLong(),
    -            TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
    +cmds.sink(json -> setDeadtimePeriod(deadtimeFilteredEngineTemp,
    +    json.getAsJsonPrimitive("period").getAsLong(),
    +    TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));
     ```
     
     ## The final application
     
    -When the application is run it will initially print out temperature sensor readings every second for 15 seconds - the deadtime period is 0.  Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    +When the application is run it will initially print out temperature sensor readings every second for 15 seconds&mdash;the deadtime period is 0. `Then every 15 seconds the application will toggle the deadtime period between 5 seconds and 0 seconds, resulting in a reduction in tuples being printed during the 5 second deadtime period.
    --- End diff --
    
    Good catch, thanks!
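
To see how the pieces in this hunk fit together end to end, here is a sketch of a complete program under a few assumptions: a `DirectProvider` runs the topology, the sample `SimulatedTemperatureSensor` is available (its package name below is assumed), and the "set deadtime period" commands are simulated inline with the same 15-second toggle described in the recipe. Note that the sketch applies each command to the `Deadtime` instance itself, since `setDeadtimePeriod()` as quoted takes a `Deadtime<T>` rather than the filtered stream.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import com.google.gson.JsonObject;

import quarks.analytics.sensors.Deadtime;
import quarks.providers.direct.DirectProvider;
import quarks.samples.utils.sensor.SimulatedTemperatureSensor; // package assumed
import quarks.topology.TStream;
import quarks.topology.Topology;

public class AdaptableDeadtimeFilterApp {

    public static void main(String[] args) {
        DirectProvider dp = new DirectProvider();
        Topology top = dp.newTopology("AdaptableDeadtimeFilter");

        // Polled sensor readings, as in the recipe
        SimulatedTemperatureSensor tempSensor = new SimulatedTemperatureSensor();
        TStream<Double> engineTemp = top.poll(tempSensor, 1, TimeUnit.SECONDS)
                                        .tag("engineTemp");

        // Deadtime filter, initially with no deadtime (period 0)
        Deadtime<Double> deadtime = new Deadtime<>();
        TStream<Double> deadtimeFilteredEngineTemp = engineTemp.filter(deadtime)
                                        .tag("deadtimeFilteredEngineTemp");

        // Simulated "set deadtime period" commands: every 15 seconds toggle
        // between a 5 second and a 0 second period (illustrative only)
        AtomicLong count = new AtomicLong();
        TStream<JsonObject> cmds = top.poll(() -> {
            JsonObject cmd = new JsonObject();
            cmd.addProperty("period", count.getAndIncrement() % 2 == 0 ? 5L : 0L);
            cmd.addProperty("unit", TimeUnit.SECONDS.name());
            return cmd;
        }, 15, TimeUnit.SECONDS).tag("simulatedSetDeadtimeCmds");

        // Apply each command to the Deadtime instance
        cmds.sink(json -> deadtime.setPeriod(
                json.getAsJsonPrimitive("period").getAsLong(),
                TimeUnit.valueOf(json.getAsJsonPrimitive("unit").getAsString())));

        // Print whatever passes the filter so the changing rate is visible
        deadtimeFilteredEngineTemp.print();

        dp.submit(top);
    }
}
```

Running this for a minute or so should show the per-second readings thinning out whenever the 5 second deadtime period is in effect, matching the behavior described in the recipe's final section.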

