Posted to commits@datafu.apache.org by mh...@apache.org on 2015/10/21 19:00:40 UTC

svn commit: r1709884 [3/8] - in /incubator/datafu/site: ./ blog/ blog/2012/01/10/ blog/2013/01/24/ blog/2013/09/04/ blog/2013/10/03/ blog/2014/04/27/ community/ docs/ docs/datafu/ docs/datafu/guide/ docs/hourglass/ javascripts/ stylesheets/

Modified: incubator/datafu/site/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html
URL: http://svn.apache.org/viewvc/incubator/datafu/site/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html?rev=1709884&r1=1709883&r2=1709884&view=diff
==============================================================================
--- incubator/datafu/site/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html (original)
+++ incubator/datafu/site/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html Wed Oct 21 17:00:40 2015
@@ -1,3 +1,5 @@
+
+
 <!doctype html>
 <html>
   <head>
@@ -10,11 +12,9 @@
     <!-- Use title if it's in the page YAML frontmatter -->
     <title>DataFu's Hourglass, Incremental Data Processing in Hadoop</title>
     
-    <link href="/stylesheets/all.css" media="screen" rel="stylesheet" type="text/css" />
-<link href="/stylesheets/highlight.css" media="screen" rel="stylesheet" type="text/css" />
-    <script src="/javascripts/all.js" type="text/javascript"></script>
+    <link href="/stylesheets/all.css" rel="stylesheet" /><link href="/stylesheets/highlight.css" rel="stylesheet" />
+    <script src="/javascripts/all.js"></script>
 
-    
     <script type="text/javascript">
       var _gaq = _gaq || [];
       _gaq.push(['_setAccount', 'UA-30533336-2']);
@@ -26,14 +26,14 @@
         var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
       })();
     </script>
-    
   </head>
   
   <body class="blog blog_2013 blog_2013_10 blog_2013_10_03 blog_2013_10_03_datafus-hourglass-incremental-data-processing-in-hadoop">
 
     <div class="container">
 
-      <div class="header">
+      
+<div class="header">
 
   <ul class="nav nav-pills pull-right">
     <li><a href="/">Home</a></li>
@@ -49,13 +49,13 @@
   <article class="col-lg-10">
     <h1>DataFu's Hourglass, Incremental Data Processing in Hadoop</h1>
     <h5 class="text-muted"><time>Oct  3, 2013</time></h5>
-    
       <h5 class="text-muted">Matthew Hayes</h5>
-    
 
     <hr>
 
-    <p>For a large scale site such as LinkedIn, tracking metrics accurately and efficiently is an important task. For example, imagine we need a dashboard that shows the number of visitors to every page on the site over the last thirty days. To keep this dashboard up to date, we can schedule a query that runs daily and gathers the stats for the last 30 days. However, this simple implementation would be wasteful: only one day of data has changed, but we&#39;d be consuming and recalculating the stats for all 30.</p>
+    <p><em>Update (10/15/2015): The links in this blog post have been updated to point to the correct locations within the Apache DataFu website.</em></p>
+
+<p>For a large scale site such as LinkedIn, tracking metrics accurately and efficiently is an important task. For example, imagine we need a dashboard that shows the number of visitors to every page on the site over the last thirty days. To keep this dashboard up to date, we can schedule a query that runs daily and gathers the stats for the last 30 days. However, this simple implementation would be wasteful: only one day of data has changed, but we&#39;d be consuming and recalculating the stats for all 30.</p>
 
 <p>A more efficient solution is to make the query incremental: using basic arithmetic, we can update the output from the previous day by adding and subtracting input data. This enables the job to process only the new data, significantly reducing the computational resources required. Unfortunately, although there are many benefits to the incremental approach, getting incremental jobs right is hard:</p>
 
@@ -65,11 +65,11 @@
 <li>There are more things that can go wrong with an incremental job, so you typically need to spend more time writing automated tests to make sure things are working.</li>
 </ul>
 
-<p>To solve these problems, we are happy to announce that we have open sourced <a href="https://github.com/linkedin/datafu/tree/master/contrib/hourglass">Hourglass</a>, a framework that makes it much easier to write incremental Hadoop jobs. We are releasing Hourglass under the Apache 2.0 License as part of the <a href="https://github.com/linkedin/datafu">DataFu</a> project. We will be presenting our &quot;Hourglass: a Library for Incremental Processing on Hadoop&quot; paper at the <a href="http://cci.drexel.edu/bigdata/bigdata2013/index.htm">IEEE BigData 2013</a> conference on October 9th.</p>
+<p>To solve these problems, we are happy to announce that we have open sourced <a href="/docs/hourglass/getting-started.html">Hourglass</a>, a framework that makes it much easier to write incremental Hadoop jobs. We are releasing Hourglass under the Apache 2.0 License as part of the <a href="/">DataFu</a> project. We will be presenting our &quot;Hourglass: a Library for Incremental Processing on Hadoop&quot; paper at the <a href="http://cci.drexel.edu/bigdata/bigdata2013/index.htm">IEEE BigData 2013</a> conference on October 9th.</p>
 
 <p>In this post, we will give an overview of the basic concepts behind Hourglass and walk through examples of using the framework to solve processing tasks incrementally. The first example presents a job that counts how many times a member has logged in to a site. The second example presents a job that estimates the number of members who have visited in the past thirty days. Lastly, we will show you how to get the code and start writing your own incremental hadoop jobs.</p>
 
-<h2 id="toc_0">Basic Concepts</h2>
+<h2 id="basic-concepts">Basic Concepts</h2>
 
 <p>Hourglass is designed to make computations over sliding windows more efficient. For these types of computations, the input data is partitioned in some way, usually according to time, and the range of input data to process is adjusted as new data arrives. Hourglass works with input data that is partitioned by day, as this is a common scheme for partitioning temporal data.</p>
 
@@ -91,13 +91,13 @@
 
 <p>We&#39;ll discuss these two jobs in the next two sections.</p>
 
-<h2 id="toc_1">Partition-preserving job</h2>
+<h2 id="partition-preserving-job">Partition-preserving job</h2>
 
 <p><img alt="partition-preserving job" src="/images/Hourglass-Concepts-Preserving.png" /></p>
 
 <p>In the partition-preserving job, input data that is partitioned by day is consumed and output data is produced that is also partitioned by day. This is equivalent to running one MapReduce job separately for each day of input data. Suppose that the input data is a page view event and the goal is to count the number of page views by member. This job would produce the page view counts per member, partitioned by day.</p>
 
-<h2 id="toc_2">Partition-collapsing job</h2>
+<h2 id="partition-collapsing-job">Partition-collapsing job</h2>
 
 <p><img alt="partition-preserving job" src="/images/Hourglass-Concepts-Collapsing.png" /></p>
 
@@ -107,7 +107,7 @@
 
 <p><img alt="partition-preserving job" src="/images/Hourglass-Concepts-CollapsingReuse.png" /></p>
 
-<h2 id="toc_3">Hourglass programming model</h2>
+<h2 id="hourglass-programming-model">Hourglass programming model</h2>
 
 <p>The Hourglass jobs are implemented as MapReduce jobs for Hadoop:</p>
 
@@ -121,84 +121,91 @@
 
 <p>Hourglass uses <a href="http://avro.apache.org/">Avro</a> for all of the input and output data types in the diagram above, namely <code>k</code>, <code>v</code>, <code>v2</code>, and <code>v3</code>. One of the tasks when programming with Hourglass is to define the schemas for these types. The exception is the input schema, which is implicitly determined by the jobs when the input is inspected.</p>
 
-<h2 id="toc_4">Example 1: Counting Events Per Member</h2>
+<h2 id="example-1-counting-events-per-member">Example 1: Counting Events Per Member</h2>
 
 <p>With the basic concepts out of the way, let&#39;s look at an example. Suppose that we have a website that tracks user logins as an event, and for each event, the member ID is recorded. These events are collected and stored in HDFS in Avro under paths with the format <code>/data/event/yyyy/MM/dd</code>. Suppose for this example our Avro schema is:</p>
-<pre class="highlight json"><span class="p">{</span><span class="w">
-  </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;record&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;ExampleEvent&quot;</span><span class="p">,</span><span class="w"> 
-  </span><span class="s2">&quot;namespace&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;datafu.hourglass.test&quot;</span><span class="p">,</span><span class="w">
-  </span><span class="s2">&quot;fields&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
-    </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;id&quot;</span><span class="p">,</span><span class="w">
-    </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;long&quot;</span><span class="p">,</span><span class="w">
-    </span><span class="s2">&quot;doc&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;ID&quot;</span><span class="w">
+<pre class="highlight json"><code><span class="p">{</span><span class="w">
+  </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"record"</span><span class="p">,</span><span class="w"> </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ExampleEvent"</span><span class="p">,</span><span class="w">
+  </span><span class="nt">"namespace"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"datafu.hourglass.test"</span><span class="p">,</span><span class="w">
+  </span><span class="nt">"fields"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
+    </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"id"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"long"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"doc"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ID"</span><span class="w">
   </span><span class="p">}</span><span class="w"> </span><span class="p">]</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></pre>
+</span></code></pre>
+
 <p>The goal is to count how many times each member has logged in over the entire history and produce a daily report containing these counts. One solution is to simply consume all data under <code>/data/event</code> each day and aggregate by member ID. While this solution works, it is very wasteful (and only gets more wasteful over time), as it recomputes all the data every day, even though only 1 day worth of data has changed. Wouldn&#39;t it be better if we could merge the previous result with the new data? With Hourglass you can.</p>
 
 <p>To continue our example, let&#39;s say there are two days of data currently available, 2013/03/15 and 2013/03/16, and that their contents are:</p>
-<pre class="highlight text">2013/03/15:
-{&quot;id&quot;: 1}, {&quot;id&quot;: 1}, {&quot;id&quot;: 1}, {&quot;id&quot;: 2}, {&quot;id&quot;: 3}, {&quot;id&quot;: 3}
+<pre class="highlight plaintext"><code>2013/03/15:
+{"id": 1}, {"id": 1}, {"id": 1}, {"id": 2}, {"id": 3}, {"id": 3}
 
 2013/03/16:
-{&quot;id&quot;: 1}, {&quot;id&quot;: 1}, {&quot;id&quot;: 2}, {&quot;id&quot;: 2}, {&quot;id&quot;: 3}, 
-</pre>
+{"id": 1}, {"id": 1}, {"id": 2}, {"id": 2}, {"id": 3},
+</code></pre>
+
 <p>Let&#39;s aggregate the counts by member ID using Hourglass. To perform the aggregation we will use <a href="/docs/hourglass/0.1.3/datafu/hourglass/jobs/PartitionCollapsingIncrementalJob.html">PartitionCollapsingIncrementalJob</a>, which takes a partitioned data set and collapses all the partitions together into a single output. The goal is to aggregate the two days of input and produce a single day of output, as in the following diagram:</p>
 
 <p><img alt="partition-preserving job" src="/images/Hourglass-Example1-Step1.png" /></p>
 
 <p>First, create the job:</p>
-<pre class="highlight java"><span class="n">PartitionCollapsingIncrementalJob</span> <span class="n">job</span> <span class="o">=</span> 
+<pre class="highlight java"><code><span class="n">PartitionCollapsingIncrementalJob</span> <span class="n">job</span> <span class="o">=</span>
     <span class="k">new</span> <span class="n">PartitionCollapsingIncrementalJob</span><span class="o">(</span><span class="n">Example</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
-</pre>
+</code></pre>
+
 <p>Next, we will define schemas for the key and value used by the job. The key affects how data is grouped in the reducer when we perform the aggregation. In this case, it will be the member ID. The value is the piece of data being aggregated, which will be an integer representing the count.</p>
-<pre class="highlight java"><span class="kd">final</span> <span class="n">String</span> <span class="n">namespace</span> <span class="o">=</span> <span class="s">&quot;com.example&quot;</span><span class="o">;</span>
+<pre class="highlight java"><code><span class="kd">final</span> <span class="n">String</span> <span class="n">namespace</span> <span class="o">=</span> <span class="s">"com.example"</span><span class="o">;</span>
 
-<span class="kd">final</span> <span class="n">Schema</span> <span class="n">keySchema</span> <span class="o">=</span> 
-  <span class="n">Schema</span><span class="o">.</span><span class="na">createRecord</span><span class="o">(</span><span class="s">&quot;Key&quot;</span><span class="o">,</span><span class="kc">null</span><span class="o">,</span><span class="n">namespace</span><span class="o">,</span><span class="kc">false</span><span class="o">);</span>
+<span class="kd">final</span> <span class="n">Schema</span> <span class="n">keySchema</span> <span class="o">=</span>
+  <span class="n">Schema</span><span class="o">.</span><span class="na">createRecord</span><span class="o">(</span><span class="s">"Key"</span><span class="o">,</span><span class="kc">null</span><span class="o">,</span><span class="n">namespace</span><span class="o">,</span><span class="kc">false</span><span class="o">);</span>
 
 <span class="n">keySchema</span><span class="o">.</span><span class="na">setFields</span><span class="o">(</span><span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span>
-  <span class="k">new</span> <span class="n">Field</span><span class="o">(</span><span class="s">&quot;member_id&quot;</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="n">Type</span><span class="o">.</span><span class="na">LONG</span><span class="o">),</span><span class="kc">null</span><span class="o">,</span><span class="kc">null</span><span class="o">)));</span>
+  <span class="k">new</span> <span class="n">Field</span><span class="o">(</span><span class="s">"member_id"</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="n">Type</span><span class="o">.</span><span class="na">LONG</span><span class="o">),</span><span class="kc">null</span><span class="o">,</span><span class="kc">null</span><span class="o">)));</span>
 
 <span class="kd">final</span> <span class="n">String</span> <span class="n">keySchemaString</span> <span class="o">=</span> <span class="n">keySchema</span><span class="o">.</span><span class="na">toString</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
 
-<span class="kd">final</span> <span class="n">Schema</span> <span class="n">valueSchema</span> <span class="o">=</span> 
-  <span class="n">Schema</span><span class="o">.</span><span class="na">createRecord</span><span class="o">(</span><span class="s">&quot;Value&quot;</span><span class="o">,</span><span class="kc">null</span><span class="o">,</span><span class="n">namespace</span><span class="o">,</span><span class="kc">false</span><span class="o">);</span>
+<span class="kd">final</span> <span class="n">Schema</span> <span class="n">valueSchema</span> <span class="o">=</span>
+  <span class="n">Schema</span><span class="o">.</span><span class="na">createRecord</span><span class="o">(</span><span class="s">"Value"</span><span class="o">,</span><span class="kc">null</span><span class="o">,</span><span class="n">namespace</span><span class="o">,</span><span class="kc">false</span><span class="o">);</span>
 
 <span class="n">valueSchema</span><span class="o">.</span><span class="na">setFields</span><span class="o">(</span><span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span>
-  <span class="k">new</span> <span class="n">Field</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="n">Type</span><span class="o">.</span><span class="na">INT</span><span class="o">),</span><span class="kc">null</span><span class="o">,</span><span class="kc">null</span><span class="o">)));</span>
-</pre>
+  <span class="k">new</span> <span class="n">Field</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">create</span><span class="o">(</span><span class="n">Type</span><span class="o">.</span><span class="na">INT</span><span class="o">),</span><span class="kc">null</span><span class="o">,</span><span class="kc">null</span><span class="o">)));</span>
+</code></pre>
+
 <p>final String valueSchemaString = valueSchema.toString(true);</p>
 
 <p>This produces the following representation:</p>
-<pre class="highlight json"><span class="p">{</span><span class="w">
-  </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;record&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;Key&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;namespace&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;com.example&quot;</span><span class="p">,</span><span class="w">
-  </span><span class="s2">&quot;fields&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
-    </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;member_id&quot;</span><span class="p">,</span><span class="w">
-    </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;long&quot;</span><span class="w">
+<pre class="highlight json"><code><span class="p">{</span><span class="w">
+  </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"record"</span><span class="p">,</span><span class="w"> </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"Key"</span><span class="p">,</span><span class="w"> </span><span class="nt">"namespace"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"com.example"</span><span class="p">,</span><span class="w">
+  </span><span class="nt">"fields"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
+    </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"member_id"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"long"</span><span class="w">
   </span><span class="p">}</span><span class="w"> </span><span class="p">]</span><span class="w">
 </span><span class="p">}</span><span class="w">
 
 </span><span class="p">{</span><span class="w">
-  </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;record&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;Value&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;namespace&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;com.example&quot;</span><span class="p">,</span><span class="w">
-  </span><span class="s2">&quot;fields&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
-    </span><span class="s2">&quot;name&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;count&quot;</span><span class="p">,</span><span class="w">
-    </span><span class="s2">&quot;type&quot;</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">&quot;int&quot;</span><span class="w">
+  </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"record"</span><span class="p">,</span><span class="w"> </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"Value"</span><span class="p">,</span><span class="w"> </span><span class="nt">"namespace"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"com.example"</span><span class="p">,</span><span class="w">
+  </span><span class="nt">"fields"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="p">{</span><span class="w">
+    </span><span class="nt">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"count"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"type"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"int"</span><span class="w">
   </span><span class="p">}</span><span class="w"> </span><span class="p">]</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></pre>
+</span></code></pre>
+
 <p>Now we can tell the job what our schemas are. Hourglass allows two different value types. One is the intermediate value type that is produced by the mapper and combiner. The other is the output value type, the product of the reducer. In this case we will use the same value type for each.</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">setKeySchema</span><span class="o">(</span><span class="n">keySchema</span><span class="o">);</span>
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">setKeySchema</span><span class="o">(</span><span class="n">keySchema</span><span class="o">);</span>
 <span class="n">job</span><span class="o">.</span><span class="na">setIntermediateValueSchema</span><span class="o">(</span><span class="n">valueSchema</span><span class="o">);</span>
 <span class="n">job</span><span class="o">.</span><span class="na">setOutputValueSchema</span><span class="o">(</span><span class="n">valueSchema</span><span class="o">);</span>
-</pre>
+</code></pre>
+
 <p>Next, we will tell Hourglass where to find the data, where to write the data, and that we want to reuse the previous output.</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">setInputPaths</span><span class="o">(</span><span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="k">new</span> <span class="n">Path</span><span class="o">(</span><span class="s">&quot;/data/event&quot;</span><span class="o">)));</span>
-<span class="n">job</span><span class="o">.</span><span class="na">setOutputPath</span><span class="o">(</span><span class="k">new</span> <span class="n">Path</span><span class="o">(</span><span class="s">&quot;/output&quot;</span><span class="o">));</span>
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">setInputPaths</span><span class="o">(</span><span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="k">new</span> <span class="n">Path</span><span class="o">(</span><span class="s">"/data/event"</span><span class="o">)));</span>
+<span class="n">job</span><span class="o">.</span><span class="na">setOutputPath</span><span class="o">(</span><span class="k">new</span> <span class="n">Path</span><span class="o">(</span><span class="s">"/output"</span><span class="o">));</span>
 <span class="n">job</span><span class="o">.</span><span class="na">setReusePreviousOutput</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
-</pre>
+</code></pre>
+
 <p>Now let&#39;s get into some application logic. The mapper will produce a key-value pair from each input record, consisting of the member ID and a count, which for each input record will just be <code>1</code>.</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">setMapper</span><span class="o">(</span><span class="k">new</span> <span class="n">Mapper</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span>
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">setMapper</span><span class="o">(</span><span class="k">new</span> <span class="n">Mapper</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span>
 <span class="o">{</span>
   <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">kSchema</span><span class="o">;</span>
   <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">vSchema</span><span class="o">;</span>
@@ -206,43 +213,44 @@
   <span class="nd">@Override</span>
   <span class="kd">public</span> <span class="kt">void</span> <span class="n">map</span><span class="o">(</span>
     <span class="n">GenericRecord</span> <span class="n">input</span><span class="o">,</span>
-    <span class="n">KeyValueCollector</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span> <span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">collector</span><span class="o">)</span> 
-  <span class="kd">throws</span> <span class="n">IOException</span><span class="o">,</span> <span class="n">InterruptedException</span> 
+    <span class="n">KeyValueCollector</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span> <span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">collector</span><span class="o">)</span>
+  <span class="kd">throws</span> <span class="n">IOException</span><span class="o">,</span> <span class="n">InterruptedException</span>
   <span class="o">{</span>
-    <span class="k">if</span> <span class="o">(</span><span class="n">kSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+    <span class="k">if</span> <span class="o">(</span><span class="n">kSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
       <span class="n">kSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">keySchemaString</span><span class="o">);</span>
 
-    <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+    <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
       <span class="n">vSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">valueSchemaString</span><span class="o">);</span>
 
     <span class="n">GenericRecord</span> <span class="n">key</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">kSchema</span><span class="o">);</span>
-    <span class="n">key</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;member_id&quot;</span><span class="o">,</span> <span class="n">input</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">));</span>
+    <span class="n">key</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"member_id"</span><span class="o">,</span> <span class="n">input</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">"id"</span><span class="o">));</span>
 
     <span class="n">GenericRecord</span> <span class="n">value</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">vSchema</span><span class="o">);</span>
-    <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="mi">1</span><span class="o">);</span>
+    <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="mi">1</span><span class="o">);</span>
 
     <span class="n">collector</span><span class="o">.</span><span class="na">collect</span><span class="o">(</span><span class="n">key</span><span class="o">,</span><span class="n">value</span><span class="o">);</span>
-  <span class="o">}</span>      
+  <span class="o">}</span>
 <span class="o">});</span>
-</pre>
+</code></pre>
+
 <p>An accumulator is responsible for aggregating this data. Records will be grouped by member ID and then passed to the accumulator one-by-one. The accumulator keeps a running total and adds each input count to it. When all data has been passed to it, the <code>getFinal()</code> method will be called, which returns the output record containing the count.</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">setReducerAccumulator</span><span class="o">(</span><span class="k">new</span> <span class="n">Accumulator</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span> 
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">setReducerAccumulator</span><span class="o">(</span><span class="k">new</span> <span class="n">Accumulator</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span>
 <span class="o">{</span>
   <span class="kd">private</span> <span class="kd">transient</span> <span class="kt">int</span> <span class="n">count</span><span class="o">;</span>
   <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">vSchema</span><span class="o">;</span>
 
   <span class="nd">@Override</span>
   <span class="kd">public</span> <span class="kt">void</span> <span class="n">accumulate</span><span class="o">(</span><span class="n">GenericRecord</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
-    <span class="k">this</span><span class="o">.</span><span class="na">count</span> <span class="o">+=</span> <span class="o">(</span><span class="n">Integer</span><span class="o">)</span><span class="n">value</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">);</span>
+    <span class="k">this</span><span class="o">.</span><span class="na">count</span> <span class="o">+=</span> <span class="o">(</span><span class="n">Integer</span><span class="o">)</span><span class="n">value</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">"count"</span><span class="o">);</span>
   <span class="o">}</span>
 
   <span class="nd">@Override</span>
   <span class="kd">public</span> <span class="n">GenericRecord</span> <span class="n">getFinal</span><span class="o">()</span> <span class="o">{</span>
-    <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+    <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
       <span class="n">vSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">valueSchemaString</span><span class="o">);</span>
 
     <span class="n">GenericRecord</span> <span class="n">output</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">vSchema</span><span class="o">);</span>
-    <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="n">count</span><span class="o">);</span>
+    <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="n">count</span><span class="o">);</span>
 
     <span class="k">return</span> <span class="n">output</span><span class="o">;</span>
   <span class="o">}</span>
@@ -250,38 +258,44 @@
   <span class="nd">@Override</span>
   <span class="kd">public</span> <span class="kt">void</span> <span class="n">cleanup</span><span class="o">()</span> <span class="o">{</span>
     <span class="k">this</span><span class="o">.</span><span class="na">count</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
-  <span class="o">}</span>      
+  <span class="o">}</span>
 <span class="o">});</span>
-</pre>
+</code></pre>
+
 <p>Since the intermediate and output values have the same schema, the accumulator can also be used for the combiner, so let&#39;s indicate that we want it to be used for that:</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">setCombinerAccumulator</span><span class="o">(</span><span class="n">job</span><span class="o">.</span><span class="na">getReducerAccumulator</span><span class="o">());</span>
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">setCombinerAccumulator</span><span class="o">(</span><span class="n">job</span><span class="o">.</span><span class="na">getReducerAccumulator</span><span class="o">());</span>
 <span class="n">job</span><span class="o">.</span><span class="na">setUseCombiner</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
-</pre>
+</code></pre>
+
 <p>Finally, we run the job.</p>
-<pre class="highlight java"><span class="n">job</span><span class="o">.</span><span class="na">run</span><span class="o">();</span>
-</pre>
+<pre class="highlight java"><code><span class="n">job</span><span class="o">.</span><span class="na">run</span><span class="o">();</span>
+</code></pre>
+
 <p>When we inspect the output, we find that the counts match what we expect:</p>
-<pre class="highlight json"><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">}}</span><span class="w">
-</span><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">}}</span><span class="w">
-</span><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">}}</span><span class="w">
-</span></pre>
+<pre class="highlight json"><code><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">}}</span><span class="w">
+</span><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">}}</span><span class="w">
+</span><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">}}</span><span class="w">
+</span></code></pre>
+
 <p>Now suppose that a new day of data becomes available:</p>
-<pre class="highlight text">2013/03/17:
-{&quot;id&quot;: 1}, {&quot;id&quot;: 1}, {&quot;id&quot;: 2}, {&quot;id&quot;: 2}, {&quot;id&quot;: 2},
-{&quot;id&quot;: 3}, {&quot;id&quot;: 3}
-</pre>
+<pre class="highlight plaintext"><code>2013/03/17:
+{"id": 1}, {"id": 1}, {"id": 2}, {"id": 2}, {"id": 2},
+{"id": 3}, {"id": 3}
+</code></pre>
+
 <p>Let&#39;s run the job again. Since Hourglass already has a result for the previous day, it consumes the new day of input and the previous output, rather than all the input data it already processed.</p>
 
 <p><img alt="partition-preserving job" src="/images/Hourglass-Example1-Step2.png" /></p>
 
 <p>The previous output is passed to the accumulator, where it is aggregated with the new data. This produces the output we expect:</p>
-<pre class="highlight json"><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">7</span><span class="p">}}</span><span class="w">
-</span><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">6</span><span class="p">}}</span><span class="w">
-</span><span class="p">{</span><span class="s2">&quot;key&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;member_id&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">},</span><span class="w"> </span><span class="s2">&quot;value&quot;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="s2">&quot;count&quot;</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">}}</span><span class="w">
-</span></pre>
+<pre class="highlight json"><code><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">7</span><span class="p">}}</span><span class="w">
+</span><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">6</span><span class="p">}}</span><span class="w">
+</span><span class="p">{</span><span class="nt">"key"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"member_id"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">},</span><span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">}}</span><span class="w">
+</span></code></pre>
+
 <p>In this example, we only have a few days of input data, so the impact of incrementally processing the new data is small. However, as the size of the input data grows, the benefit of incrementally processing data becomes very significant.</p>
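
The incremental merge performed here amounts to adding each member&#39;s previous count to that member&#39;s count from the new day. As a minimal plain-Java sketch of that arithmetic (a hypothetical helper for illustration, not the Hourglass API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: how a partition-collapsing job reuses the
// previous window's output instead of reprocessing all input days.
public class IncrementalMergeSketch {
    // Combine the previous output counts with counts from the new day.
    static Map<Integer, Integer> merge(Map<Integer, Integer> previous,
                                       Map<Integer, Integer> newDay) {
        Map<Integer, Integer> result = new HashMap<>(previous);
        newDay.forEach((memberId, count) ->
            result.merge(memberId, count, Integer::sum));
        return result;
    }

    public static void main(String[] args) {
        // Previous output (counts through 2013/03/16).
        Map<Integer, Integer> previous = new HashMap<>();
        previous.put(1, 5); previous.put(2, 3); previous.put(3, 3);

        // Counts derived from the new day of input (2013/03/17).
        Map<Integer, Integer> newDay = new HashMap<>();
        newDay.put(1, 2); newDay.put(2, 3); newDay.put(3, 2);

        System.out.println(merge(previous, newDay)); // {1=7, 2=6, 3=5}
    }
}
```

Hourglass performs this addition per key inside the reducer&#39;s accumulator, reading only the new day&#39;s input and the prior output.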
 
-<h2 id="toc_5">Example 2: Cardinality Estimation</h2>
+<h2 id="example-2-cardinality-estimation">Example 2: Cardinality Estimation</h2>
 
 <p>Suppose that we have another event that tracks every page view that occurs on the site. One piece of information recorded in the event is the member ID. We want to use this event to tackle another problem: a daily report that estimates the number of members who have been active on the site in the past 30 days.</p>
 
@@ -300,7 +314,7 @@
 <p>HyperLogLog is a good fit for this use case. For this example, we will use <a href="https://github.com/clearspring/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/cardinality/HyperLogLogPlus.java">HyperLogLogPlus</a> from <a href="https://github.com/clearspring/stream-lib">stream-lib</a>, an implementation based on <a href="http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/pubs/archive/40671.pdf">this paper</a> that includes some enhancements to the original algorithm. We can use a HyperLogLogPlus estimator for each day of input data in the partition-preserving job and serialize the estimator&#39;s bytes as the output. Then the partition-collapsing job can merge together the estimators for the time window to produce the final estimate.</p>
 
 <p>Let&#39;s start by defining the mapper. The key it uses is just a dummy value, as we are only producing a single statistic in this case. For the value we use a record with two fields: one is the count estimate; the other we&#39;ll just call &quot;data&quot;, which can be either a single member ID or the bytes from the serialized estimator. In the map output, the &quot;data&quot; field holds the member ID.</p>
-<pre class="highlight java"><span class="n">Mapper</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">mapper</span> <span class="o">=</span> 
+<pre class="highlight java"><code><span class="n">Mapper</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">mapper</span> <span class="o">=</span>
   <span class="k">new</span> <span class="n">Mapper</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span> <span class="o">{</span>
     <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">kSchema</span><span class="o">;</span>
     <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">vSchema</span><span class="o">;</span>
@@ -308,28 +322,29 @@
     <span class="nd">@Override</span>
     <span class="kd">public</span> <span class="kt">void</span> <span class="n">map</span><span class="o">(</span>
       <span class="n">GenericRecord</span> <span class="n">input</span><span class="o">,</span>
-      <span class="n">KeyValueCollector</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span> <span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">collector</span><span class="o">)</span> 
+      <span class="n">KeyValueCollector</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span> <span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">collector</span><span class="o">)</span>
     <span class="kd">throws</span> <span class="n">IOException</span><span class="o">,</span> <span class="n">InterruptedException</span>
     <span class="o">{</span>
-      <span class="k">if</span> <span class="o">(</span><span class="n">kSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+      <span class="k">if</span> <span class="o">(</span><span class="n">kSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
         <span class="n">kSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">keySchemaString</span><span class="o">);</span>
 
-      <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+      <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
         <span class="n">vSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">valueSchemaString</span><span class="o">);</span>
 
       <span class="n">GenericRecord</span> <span class="n">key</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">kSchema</span><span class="o">);</span>
-      <span class="n">key</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;name&quot;</span><span class="o">,</span> <span class="s">&quot;member_count&quot;</span><span class="o">);</span>
+      <span class="n">key</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"name"</span><span class="o">,</span> <span class="s">"member_count"</span><span class="o">);</span>
 
       <span class="n">GenericRecord</span> <span class="n">value</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">vSchema</span><span class="o">);</span>
-      <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;data&quot;</span><span class="o">,</span><span class="n">input</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">));</span> <span class="c1">// member id</span>
-      <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="mi">1L</span><span class="o">);</span>            <span class="c1">// just a single member</span>
+      <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"data"</span><span class="o">,</span><span class="n">input</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">"id"</span><span class="o">));</span> <span class="c1">// member id</span>
+      <span class="n">value</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="mi">1L</span><span class="o">);</span>            <span class="c1">// just a single member</span>
 
-      <span class="n">collector</span><span class="o">.</span><span class="na">collect</span><span class="o">(</span><span class="n">key</span><span class="o">,</span><span class="n">value</span><span class="o">);</span>        
-    <span class="o">}</span>      
+      <span class="n">collector</span><span class="o">.</span><span class="na">collect</span><span class="o">(</span><span class="n">key</span><span class="o">,</span><span class="n">value</span><span class="o">);</span>
+    <span class="o">}</span>
   <span class="o">};</span>
-</pre>
+</code></pre>
+
 <p>Next, we&#39;ll define the accumulator, which can be used for both the combiner and the reducer. This accumulator can handle either member IDs or estimator bytes. When it receives a member ID it adds it to the HyperLogLog estimator. When it receives an estimator it merges it with the current estimator to produce a new one. To produce the final result, it gets the current estimate and also serializes the current estimator as a sequence of bytes.</p>
-<pre class="highlight java"><span class="n">Accumulator</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">accumulator</span> <span class="o">=</span> 
+<pre class="highlight java"><code><span class="n">Accumulator</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;</span> <span class="n">accumulator</span> <span class="o">=</span>
   <span class="k">new</span> <span class="n">Accumulator</span><span class="o">&lt;</span><span class="n">GenericRecord</span><span class="o">,</span><span class="n">GenericRecord</span><span class="o">&gt;()</span> <span class="o">{</span>
     <span class="kd">private</span> <span class="kd">transient</span> <span class="n">HyperLogLogPlus</span> <span class="n">estimator</span><span class="o">;</span>
     <span class="kd">private</span> <span class="kd">transient</span> <span class="n">Schema</span> <span class="n">vSchema</span><span class="o">;</span>
@@ -338,21 +353,21 @@
     <span class="kd">public</span> <span class="kt">void</span> <span class="n">accumulate</span><span class="o">(</span><span class="n">GenericRecord</span> <span class="n">value</span><span class="o">)</span>
     <span class="o">{</span>
       <span class="k">if</span> <span class="o">(</span><span class="n">estimator</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> <span class="n">estimator</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HyperLogLogPlus</span><span class="o">(</span><span class="mi">20</span><span class="o">);</span>
-      <span class="n">Object</span> <span class="n">data</span> <span class="o">=</span> <span class="n">value</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">&quot;data&quot;</span><span class="o">);</span>
+      <span class="n">Object</span> <span class="n">data</span> <span class="o">=</span> <span class="n">value</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="s">"data"</span><span class="o">);</span>
       <span class="k">if</span> <span class="o">(</span><span class="n">data</span> <span class="k">instanceof</span> <span class="n">Long</span><span class="o">)</span>
       <span class="o">{</span>
         <span class="n">estimator</span><span class="o">.</span><span class="na">offer</span><span class="o">(</span><span class="n">data</span><span class="o">);</span>
       <span class="o">}</span>
       <span class="k">else</span> <span class="k">if</span> <span class="o">(</span><span class="n">data</span> <span class="k">instanceof</span> <span class="n">ByteBuffer</span><span class="o">)</span>
       <span class="o">{</span>
-        <span class="n">ByteBuffer</span> <span class="kt">byte</span><span class="n">s</span> <span class="o">=</span> <span class="o">(</span><span class="n">ByteBuffer</span><span class="o">)</span><span class="n">data</span><span class="o">;</span>
+        <span class="n">ByteBuffer</span> <span class="n">bytes</span> <span class="o">=</span> <span class="o">(</span><span class="n">ByteBuffer</span><span class="o">)</span><span class="n">data</span><span class="o">;</span>
         <span class="n">HyperLogLogPlus</span> <span class="n">newEstimator</span><span class="o">;</span>
         <span class="k">try</span>
         <span class="o">{</span>
-          <span class="n">newEstimator</span> <span class="o">=</span> 
-            <span class="n">HyperLogLogPlus</span><span class="o">.</span><span class="na">Builder</span><span class="o">.</span><span class="na">build</span><span class="o">(</span><span class="kt">byte</span><span class="n">s</span><span class="o">.</span><span class="na">array</span><span class="o">());</span>
+          <span class="n">newEstimator</span> <span class="o">=</span>
+            <span class="n">HyperLogLogPlus</span><span class="o">.</span><span class="na">Builder</span><span class="o">.</span><span class="na">build</span><span class="o">(</span><span class="n">bytes</span><span class="o">.</span><span class="na">array</span><span class="o">());</span>
 
-          <span class="n">estimator</span> <span class="o">=</span> 
+          <span class="n">estimator</span> <span class="o">=</span>
             <span class="o">(</span><span class="n">HyperLogLogPlus</span><span class="o">)</span><span class="n">estimator</span><span class="o">.</span><span class="na">merge</span><span class="o">(</span><span class="n">newEstimator</span><span class="o">);</span>
         <span class="o">}</span>
         <span class="k">catch</span> <span class="o">(</span><span class="n">IOException</span> <span class="n">e</span><span class="o">)</span>
@@ -362,24 +377,24 @@
         <span class="k">catch</span> <span class="o">(</span><span class="n">CardinalityMergeException</span> <span class="n">e</span><span class="o">)</span>
         <span class="o">{</span>
           <span class="k">throw</span> <span class="k">new</span> <span class="n">RuntimeException</span><span class="o">(</span><span class="n">e</span><span class="o">);</span>
-        <span class="o">}</span>      
+        <span class="o">}</span>
       <span class="o">}</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
     <span class="kd">public</span> <span class="n">GenericRecord</span> <span class="n">getFinal</span><span class="o">()</span>
     <span class="o">{</span>
-      <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span> 
+      <span class="k">if</span> <span class="o">(</span><span class="n">vSchema</span> <span class="o">==</span> <span class="kc">null</span><span class="o">)</span>
         <span class="n">vSchema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Schema</span><span class="o">.</span><span class="na">Parser</span><span class="o">().</span><span class="na">parse</span><span class="o">(</span><span class="n">valueSchemaString</span><span class="o">);</span>
 
       <span class="n">GenericRecord</span> <span class="n">output</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GenericData</span><span class="o">.</span><span class="na">Record</span><span class="o">(</span><span class="n">vSchema</span><span class="o">);</span>
 
       <span class="k">try</span>
       <span class="o">{</span>
-        <span class="n">ByteBuffer</span> <span class="kt">byte</span><span class="n">s</span> <span class="o">=</span> 
+        <span class="n">ByteBuffer</span> <span class="n">bytes</span> <span class="o">=</span>
           <span class="n">ByteBuffer</span><span class="o">.</span><span class="na">wrap</span><span class="o">(</span><span class="n">estimator</span><span class="o">.</span><span class="na">getBytes</span><span class="o">());</span>
-        <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;data&quot;</span><span class="o">,</span> <span class="kt">byte</span><span class="n">s</span><span class="o">);</span>
-        <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="n">estimator</span><span class="o">.</span><span class="na">cardinality</span><span class="o">());</span>
+        <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"data"</span><span class="o">,</span> <span class="n">bytes</span><span class="o">);</span>
+        <span class="n">output</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="n">estimator</span><span class="o">.</span><span class="na">cardinality</span><span class="o">());</span>
       <span class="o">}</span>
       <span class="k">catch</span> <span class="o">(</span><span class="n">IOException</span> <span class="n">e</span><span class="o">)</span>
       <span class="o">{</span>
@@ -392,73 +407,88 @@
     <span class="kd">public</span> <span class="kt">void</span> <span class="n">cleanup</span><span class="o">()</span>
     <span class="o">{</span>
       <span class="n">estimator</span> <span class="o">=</span> <span class="kc">null</span><span class="o">;</span>
-    <span class="o">}</span>      
+    <span class="o">}</span>
   <span class="o">};</span>
-</pre>
+</code></pre>
+
 <p>So there you have it. With the mapper and accumulator now defined, it is just a matter of passing them to the jobs and providing some other configuration. The key piece is to ensure that the second job uses a 30 day sliding window:</p>
-<pre class="highlight java"><span class="n">PartitionCollapsingIncrementalJob</span> <span class="n">job2</span> <span class="o">=</span> 
-  <span class="k">new</span> <span class="n">PartitionCollapsingIncrementalJob</span><span class="o">(</span><span class="n">Example</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>    
+<pre class="highlight java"><code><span class="n">PartitionCollapsingIncrementalJob</span> <span class="n">job2</span> <span class="o">=</span>
+  <span class="k">new</span> <span class="n">PartitionCollapsingIncrementalJob</span><span class="o">(</span><span class="n">Example</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
 
 <span class="c1">// ...</span>
 
 <span class="n">job2</span><span class="o">.</span><span class="na">setNumDays</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span> <span class="c1">// 30 day sliding window</span>
-</pre>
-<h2 id="toc_6">Try it yourself!</h2>
+</code></pre>
+
+<h2 id="try-it-yourself">Try it yourself!</h2>
+
+<p><em>Update (10/15/2015): Please see the updated version of these instructions at <a href="/docs/hourglass/getting-started.html">Getting Started</a>, which have changed significantly.  The instructions below will not work with the current code base, which has moved to Apache.</em></p>
 
 <p>Here is how you can start using Hourglass. We&#39;ll test out the job from the first example against some test data we&#39;ll create in Hadoop. First, clone the DataFu repository and navigate to the Hourglass directory:</p>
-<pre class="highlight text">git clone https://github.com/linkedin/datafu.git
+<pre class="highlight plaintext"><code>git clone https://github.com/linkedin/datafu.git
 cd contrib/hourglass
-</pre>
+</code></pre>
+
 <p>Build the Hourglass JAR, and in addition build a test jar that contains the example jobs above.</p>
-<pre class="highlight text">ant jar
+<pre class="highlight plaintext"><code>ant jar
 ant testjar
-</pre>
+</code></pre>
+
 <p>Define some variables that we&#39;ll need for the <code>hadoop jar</code> command. These list the JAR dependencies, as well as the two JARs we just built.</p>
-<pre class="highlight text">export LIBJARS=$(find &quot;lib/common&quot; -name '*.jar' | xargs echo | tr ' ' ',')
-export LIBJARS=$LIBJARS,$(find &quot;build&quot; -name '*.jar' | xargs echo | tr ' ' ',')
+<pre class="highlight plaintext"><code>export LIBJARS=$(find "lib/common" -name '*.jar' | xargs echo | tr ' ' ',')
+export LIBJARS=$LIBJARS,$(find "build" -name '*.jar' | xargs echo | tr ' ' ',')
 export HADOOP_CLASSPATH=`echo ${LIBJARS} | sed s/,/:/g`
-</pre>
+</code></pre>
+
 <p>Generate some test data under <code>/data/event</code> using a <code>generate</code> tool. This will create some random events for dates between 2013/03/01 and 2013/03/14. Each record consists of just a single long value in the range 1-100.</p>
-<pre class="highlight text">hadoop jar build/datafu-hourglass-test.jar generate -libjars ${LIBJARS} /data/event 2013/03/01-2013/03/14
-</pre>
+<pre class="highlight plaintext"><code>hadoop jar build/datafu-hourglass-test.jar generate -libjars ${LIBJARS} /data/event 2013/03/01-2013/03/14
+</code></pre>
+
 <p>Just to get a sense for what the data looks like, we can copy it locally and dump the first several records.</p>
-<pre class="highlight text">hadoop fs -copyToLocal /data/event/2013/03/01/part-00000.avro temp.avro
+<pre class="highlight plaintext"><code>hadoop fs -copyToLocal /data/event/2013/03/01/part-00000.avro temp.avro
 java -jar lib/test/avro-tools-jar-1.7.4.jar tojson temp.avro | head
-</pre>
+</code></pre>
+
 <p>Now run the <code>countbyid</code> tool, which executes the job from the first example that we defined earlier. This will count the number of events for each ID value. In the output you will notice that it reads all fourteen days of input that are available.</p>
-<pre class="highlight text">hadoop jar build/datafu-hourglass-test.jar countbyid -libjars ${LIBJARS} /data/event /output
-</pre>
+<pre class="highlight plaintext"><code>hadoop jar build/datafu-hourglass-test.jar countbyid -libjars ${LIBJARS} /data/event /output
+</code></pre>
+
 <p>We can see what this produced by copying the output locally and dumping the first several records. Each record consists of an ID and a count.</p>
-<pre class="highlight text">rm temp.avro
+<pre class="highlight plaintext"><code>rm temp.avro
 hadoop fs -copyToLocal /output/20130314/part-r-00000.avro temp.avro
 java -jar lib/test/avro-tools-jar-1.7.4.jar tojson temp.avro | head
-</pre>
+</code></pre>
+
 <p>Now let&#39;s generate an additional day of data for 2013/03/15.</p>
-<pre class="highlight text">hadoop jar build/datafu-hourglass-test.jar generate -libjars ${LIBJARS} /data/event 2013/03/15
-</pre>
+<pre class="highlight plaintext"><code>hadoop jar build/datafu-hourglass-test.jar generate -libjars ${LIBJARS} /data/event 2013/03/15
+</code></pre>
+
 <p>We can run the incremental job again. This time it will reuse the previous output and will only consume the new day of input.</p>
-<pre class="highlight text">hadoop jar build/datafu-hourglass-test.jar countbyid -libjars ${LIBJARS} /data/event /output
-</pre>
+<pre class="highlight plaintext"><code>hadoop jar build/datafu-hourglass-test.jar countbyid -libjars ${LIBJARS} /data/event /output
+</code></pre>
+
 <p>We can download the new output and inspect the counts:</p>
-<pre class="highlight text">rm temp.avro
+<pre class="highlight plaintext"><code>rm temp.avro
 hadoop fs -copyToLocal /output/20130315/part-r-00000.avro temp.avro
 java -jar lib/test/avro-tools-jar-1.7.4.jar tojson temp.avro | head
-</pre>
+</code></pre>
+
 <p>Both of the examples in this post are also included as unit tests in the <code>Example</code> class within the source code. Some code has been omitted from the examples in this post for the sake of space, so please check the original source if you&#39;re interested in more of the details.</p>
 
 <p>If you&#39;re interested in the project, we also encourage you to try running the unit tests, which can be run in Eclipse once the project is loaded there, or by running <code>ant test</code> at the command line.</p>
 
-<h2 id="toc_7">Conclusion</h2>
+<h2 id="conclusion">Conclusion</h2>
 
-<p>We hope this whets your appetite for incremental data processing with DataFu&#39;s Hourglass. The <a href="https://github.com/linkedin/datafu/tree/master/contrib/hourglass">code</a> is available on Github in the <a href="https://github.com/linkedin/datafu">DataFu</a> repository under an Apache 2.0 license. Documentation is available <a href="/docs/hourglass/javadoc.html">here</a>. We are accepting contributions, so if you are interesting in helping out, please fork the code and send us your pull requests!</p>
+<p>We hope this whets your appetite for incremental data processing with DataFu&#39;s Hourglass. The <a href="https://github.com/apache/incubator-datafu/tree/master/datafu-hourglass">code</a> is available on GitHub in the <a href="https://github.com/apache/incubator-datafu">DataFu</a> repository under an Apache 2.0 license. Documentation is available <a href="/docs/hourglass/javadoc.html">here</a>. We are accepting contributions, so if you are interested in helping out, please fork the code and send us your pull requests!</p>
 
 
   </article>
 </div>
 
     
-      <div class="footer">
-Copyright &copy; 2011-2014 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
+      
+<div class="footer">
+Copyright &copy; 2011-2015 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
 Apache DataFu, DataFu, Apache Pig, Apache Hadoop, Hadoop, Apache, and the Apache feather logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.
 </div>
 

Modified: incubator/datafu/site/blog/2014/04/27/datafu-at-apachecon.html
URL: http://svn.apache.org/viewvc/incubator/datafu/site/blog/2014/04/27/datafu-at-apachecon.html?rev=1709884&r1=1709883&r2=1709884&view=diff
==============================================================================
--- incubator/datafu/site/blog/2014/04/27/datafu-at-apachecon.html (original)
+++ incubator/datafu/site/blog/2014/04/27/datafu-at-apachecon.html Wed Oct 21 17:00:40 2015
@@ -1,3 +1,5 @@
+
+
 <!doctype html>
 <html>
   <head>
@@ -10,11 +12,9 @@
     <!-- Use title if it's in the page YAML frontmatter -->
     <title>DataFu @ Apache 2014</title>
     
-    <link href="/stylesheets/all.css" media="screen" rel="stylesheet" type="text/css" />
-<link href="/stylesheets/highlight.css" media="screen" rel="stylesheet" type="text/css" />
-    <script src="/javascripts/all.js" type="text/javascript"></script>
+    <link href="/stylesheets/all.css" rel="stylesheet" /><link href="/stylesheets/highlight.css" rel="stylesheet" />
+    <script src="/javascripts/all.js"></script>
 
-    
     <script type="text/javascript">
       var _gaq = _gaq || [];
       _gaq.push(['_setAccount', 'UA-30533336-2']);
@@ -26,14 +26,14 @@
         var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
       })();
     </script>
-    
   </head>
   
   <body class="blog blog_2014 blog_2014_04 blog_2014_04_27 blog_2014_04_27_datafu-at-apachecon">
 
     <div class="container">
 
-      <div class="header">
+      
+<div class="header">
 
   <ul class="nav nav-pills pull-right">
     <li><a href="/">Home</a></li>
@@ -49,9 +49,7 @@
   <article class="col-lg-10">
     <h1>DataFu @ Apache 2014</h1>
     <h5 class="text-muted"><time>Apr 27, 2014</time></h5>
-    
       <h5 class="text-muted">Matthew Hayes</h5>
-    
 
     <hr>
 
@@ -62,8 +60,9 @@
 </div>
 
     
-      <div class="footer">
-Copyright &copy; 2011-2014 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
+      
+<div class="footer">
+Copyright &copy; 2011-2015 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
 Apache DataFu, DataFu, Apache Pig, Apache Hadoop, Hadoop, Apache, and the Apache feather logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.
 </div>
 

Modified: incubator/datafu/site/blog/index.html
URL: http://svn.apache.org/viewvc/incubator/datafu/site/blog/index.html?rev=1709884&r1=1709883&r2=1709884&view=diff
==============================================================================
--- incubator/datafu/site/blog/index.html (original)
+++ incubator/datafu/site/blog/index.html Wed Oct 21 17:00:40 2015
@@ -1,3 +1,4 @@
+
 <!doctype html>
 <html>
   <head>
@@ -10,11 +11,9 @@
     <!-- Use title if it's in the page YAML frontmatter -->
     <title>Blog - DataFu</title>
     
-    <link href="/stylesheets/all.css" media="screen" rel="stylesheet" type="text/css" />
-<link href="/stylesheets/highlight.css" media="screen" rel="stylesheet" type="text/css" />
-    <script src="/javascripts/all.js" type="text/javascript"></script>
+    <link href="/stylesheets/all.css" rel="stylesheet" /><link href="/stylesheets/highlight.css" rel="stylesheet" />
+    <script src="/javascripts/all.js"></script>
 
-    
     <script type="text/javascript">
       var _gaq = _gaq || [];
       _gaq.push(['_setAccount', 'UA-30533336-2']);
@@ -26,14 +25,14 @@
         var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
       })();
     </script>
-    
   </head>
   
   <body class="blog blog_index">
 
     <div class="container">
 
-      <div class="header">
+      
+<div class="header">
 
   <ul class="nav nav-pills pull-right">
     <li><a href="/">Home</a></li>
@@ -45,15 +44,14 @@
 </div>
 
       
-      <div class="row">
+      
+<div class="row">
   <article class="col-lg-10">
     <h2><a href="/blog/2014/04/27/datafu-at-apachecon.html">DataFu @ Apache 2014</a></h2>
     <h5 class="text-muted"><time>Apr 27, 2014</time>
-    
     <h5 class="text-muted">
         Matthew Hayes
     </h5>
-    
     <p><a href="https://www.linkedin.com/in/williamgvaughan">William Vaughan</a> gave a presentation at ApacheCon North America on Apache DataFu.  Check out the <a href="http://www.slideshare.net/williamgvaughan/datafu-apachecon-33420740">slides</a> and <a href="https://www.youtube.com/watch?v=JWI9tVsQ1cY">video</a> for a great overview of Apache DataFu and some of the cool things you can do with it.</p>
 
     <a href="/blog/2014/04/27/datafu-at-apachecon.html">Read more...</a>
@@ -63,12 +61,12 @@
   <article class="col-lg-10">
     <h2><a href="/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html">DataFu's Hourglass, Incremental Data Processing in Hadoop</a></h2>
     <h5 class="text-muted"><time>Oct  3, 2013</time>
-    
     <h5 class="text-muted">
         Matthew Hayes
     </h5>
-    
-    <p>For a large scale site such as LinkedIn, tracking metrics accurately and efficiently is an important task. For example, imagine we need a dashboard that shows the number of visitors to every page on the site over the last thirty days. To keep this...</p>
+    <p><em>Update (10/15/2015): The links in this blog post have been updated to point to the correct locations within the Apache DataFu website.</em></p>
+
+<p>For a large scale site such as LinkedIn, tracking metrics accurately and efficiently is an important task. For example...</p>
     <a href="/blog/2013/10/03/datafus-hourglass-incremental-data-processing-in-hadoop.html">Read more...</a>
   </article>
 </div>
@@ -76,14 +74,12 @@
   <article class="col-lg-10">
     <h2><a href="/blog/2013/09/04/datafu-1-0.html">DataFu 1.0</a></h2>
     <h5 class="text-muted"><time>Sep  4, 2013</time>
-    
     <h5 class="text-muted">
         William Vaughan
     </h5>
-    
-    <p><a href="http://data.linkedin.com/opensource/datafu">DataFu</a> is an open-source collection of user-defined functions for working with large-scale data in <a href="http://hadoop.apache.org/">Hadoop</a> and <a href="http://pig.apache.org/">Pig</a>.</p>
+    <p><em>Update (10/15/2015): The links in this blog post have been updated to point to the correct locations within the Apache DataFu website.</em></p>
 
-<p>About two years ago, we recognized a need for a stable, well-tested library of Pig UDFs that could assist in common data mining and...</p>
+<p><a href="/">DataFu</a> is an open-source collection of user-defined functions for working with large-scale data in <a href="http://hadoop.apache.org/">Hadoop</a> and <a href="http://pig.apache.org/">Pig</a></p>
     <a href="/blog/2013/09/04/datafu-1-0.html">Read more...</a>
   </article>
 </div>
@@ -91,11 +87,9 @@
   <article class="col-lg-10">
     <h2><a href="/blog/2013/01/24/datafu-the-wd-40-of-big-data.html">DataFu, The WD-40 of Big Data</a></h2>
     <h5 class="text-muted"><time>Jan 24, 2013</time>
-    
     <h5 class="text-muted">
         Matthew Hayes, Sam Shah
     </h5>
-    
     <p>If Pig is the “<a href="http://blog.linkedin.com/2010/07/01/linkedin-apache-pig/">duct tape for big data</a>“, then DataFu is the WD-40. Or something.</p>
 
 <p>No, seriously, DataFu is a collection of Pig UDFs for data analysis on Hadoop. DataFu includes routines for common statistics tasks (e.g., median, variance), PageRank...</p>
@@ -106,19 +100,18 @@
   <article class="col-lg-10">
     <h2><a href="/blog/2012/01/10/introducing-datafu.html">Introducing DataFu, an open source collection of useful Apache Pig UDFs</a></h2>
     <h5 class="text-muted"><time>Jan 10, 2012</time>
-    
     <h5 class="text-muted">
         Matthew Hayes
     </h5>
-    
     <p>At LinkedIn, we make extensive use of <a href="http://pig.apache.org/">Apache Pig</a> for performing <a href="http://engineering.linkedin.com/hadoop/user-engagement-powered-apache-pig-and-hadoop">data analysis on Hadoop</a>. Pig is a simple, high-level programming language that consists of just a few dozen operators and makes it easy to write MapReduce jobs. For more advanced tasks...</p>
     <a href="/blog/2012/01/10/introducing-datafu.html">Read more...</a>
   </article>
 </div>
 
     
-      <div class="footer">
-Copyright &copy; 2011-2014 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
+      
+<div class="footer">
+Copyright &copy; 2011-2015 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
 Apache DataFu, DataFu, Apache Pig, Apache Hadoop, Hadoop, Apache, and the Apache feather logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.
 </div>
 

Added: incubator/datafu/site/community/contributing.html
URL: http://svn.apache.org/viewvc/incubator/datafu/site/community/contributing.html?rev=1709884&view=auto
==============================================================================
--- incubator/datafu/site/community/contributing.html (added)
+++ incubator/datafu/site/community/contributing.html Wed Oct 21 17:00:40 2015
@@ -0,0 +1,159 @@
+
+
+<!doctype html>
+<html>
+  <head>
+    <meta charset="utf-8">
+    
+    <!-- Always force latest IE rendering engine or request Chrome Frame -->
+    <meta content="IE=edge,chrome=1" http-equiv="X-UA-Compatible">
+    <meta name="google-site-verification" content="9N7qTOUYyX4kYfXYc0OIomWJku3PVvGrf6oTNWg2CHI" />
+    
+    <!-- Use title if it's in the page YAML frontmatter -->
+    <title>Contributing - Community</title>
+    
+    <link href="/stylesheets/all.css" rel="stylesheet" /><link href="/stylesheets/highlight.css" rel="stylesheet" />
+    <script src="/javascripts/all.js"></script>
+
+    <script type="text/javascript">
+      var _gaq = _gaq || [];
+      _gaq.push(['_setAccount', 'UA-30533336-2']);
+      _gaq.push(['_trackPageview']);
+
+      (function() {
+        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+      })();
+    </script>
+  </head>
+  
+  <body class="community community_contributing">
+
+    <div class="container">
+
+      
+<div class="header">
+
+  <ul class="nav nav-pills pull-right">
+    <li><a href="/">Home</a></li>
+    <li><a href="/blog">Blog</a></li>
+  </ul>
+
+  <h3 class="header-title"><a href="/">Apache DataFu&trade;</a></h3>
+
+</div>
+
+      
+      
+  <div class="row">
+    <div class="col-md-3">
+      
+<h4>Apache DataFu</h4>
+<ul class="nav nav-pills nav-stacked">
+  <li><a href="/">Home</a></li>
+  <li><a href="/docs/quick-start.html">Quick Start</a></li>
+</ul>
+
+<h4>Apache DataFu Pig</h4>
+<ul class="nav nav-pills nav-stacked">
+  <li><a href="/docs/datafu/getting-started.html">Getting Started</a></li>
+  <li><a href="/docs/datafu/guide.html">Guide</a></li>
+  <li><a href="/docs/datafu/javadoc.html">Javadoc</a></li>
+</ul>
+
+<h4>Apache DataFu Hourglass</h4>
+<ul class="nav nav-pills nav-stacked">
+  <li><a href="/docs/hourglass/getting-started.html">Getting Started</a></li>
+  <li><a href="/docs/hourglass/concepts.html">Concepts</a></li>
+  <li><a href="/docs/hourglass/javadoc.html">Javadoc</a></li>
+</ul>
+
+<h4>Community</h4>
+<ul class="nav nav-pills nav-stacked">
+  <li><a href="/community/contributing.html">Contributing</a></li>
+  <li><a href="/community/mailing-lists.html">Mailing Lists</a></li>
+  <li><a href="https://issues.apache.org/jira/browse/DATAFU">Bugs</a></li>
+</ul>
+    </div>
+    <div class="col-md-7">
+      <h4 class="text-muted">Community</h4>
+      <h1 id="contributing">Contributing</h1>
+
+<p>We welcome contributions to Apache DataFu.  If you&#39;re interested, please read the following guide:</p>
+
+<p><a href="https://cwiki.apache.org/confluence/display/DATAFU/Contributing+to+Apache+DataFu">https://cwiki.apache.org/confluence/display/DATAFU/Contributing+to+Apache+DataFu</a></p>
+
+<h2 id="working-in-the-code-base">Working in the Code Base</h2>
+
+<p>Common tasks for working in the DataFu code can be found below.  For information on how to contribute patches, please
+follow the wiki link above.</p>
+
+<h3 id="get-the-code">Get the Code</h3>
+
+<p>If you haven&#39;t done so already:</p>
+<pre class="highlight plaintext"><code>git clone https://git-wip-us.apache.org/repos/asf/incubator-datafu.git
+cd incubator-datafu
+</code></pre>
+
+<h3 id="generate-eclipse-files">Generate Eclipse Files</h3>
+
+<p>The following command generates the necessary files to load the project in Eclipse:</p>
+<pre class="highlight plaintext"><code>./gradlew eclipse
+</code></pre>
+
+<p>To clean up the eclipse files:</p>
+<pre class="highlight plaintext"><code>./gradlew cleanEclipse
+</code></pre>
+
+<p>Note that you may run out of heap when executing tests in Eclipse.  To fix this, adjust your heap settings for the TestNG plugin.  Go to Eclipse-&gt;Preferences.  Select TestNG-&gt;Run/Debug.  Add &quot;-Xmx1G&quot; to the JVM args.</p>
+
+<h3 id="building">Building</h3>
+
+<p>All the JARs for the project can be built with the following command:</p>
+<pre class="highlight plaintext"><code>./gradlew assemble
+</code></pre>
+
+<p>This builds SNAPSHOT versions of the JARs for both DataFu Pig and Hourglass.  The built JARs can be found under <code>datafu-pig/build/libs</code> and <code>datafu-hourglass/build/libs</code>, respectively.</p>
+
+<p>The Apache DataFu Pig and Hourglass libraries can be built individually by running the commands below.</p>
+<pre class="highlight plaintext"><code>./gradlew :datafu-pig:assemble
+./gradlew :datafu-hourglass:assemble
+</code></pre>
+
+<h3 id="running-tests">Running Tests</h3>
+
+<p>Tests can be run with the following command:</p>
+<pre class="highlight plaintext"><code>./gradlew test
+</code></pre>
+
+<p>All the tests can also be run from within Eclipse.</p>
+
+<p>To run the DataFu Pig or Hourglass tests specifically:</p>
+<pre class="highlight plaintext"><code>./gradlew :datafu-pig:test
+./gradlew :datafu-hourglass:test
+</code></pre>
+
+<p>To run a specific set of tests from the command line, you can define the <code>test.single</code> system property with a value matching the test class you want to run.  For example, to run all tests defined in the <code>QuantileTests</code> test class for DataFu Pig:</p>
+<pre class="highlight plaintext"><code>./gradlew :datafu-pig:test -Dtest.single=QuantileTests
+</code></pre>
+
+<p>You can similarly run a specific Hourglass test like so:</p>
+<pre class="highlight plaintext"><code>./gradlew :datafu-hourglass:test -Dtest.single=PartitionCollapsingTests
+</code></pre>
+
+    </div>
+  </div>
+
+
+    
+      
+<div class="footer">
+Copyright &copy; 2011-2015 <a href="http://www.apache.org/licenses/">The Apache Software Foundation</a>. <br>
+Apache DataFu, DataFu, Apache Pig, Apache Hadoop, Hadoop, Apache, and the Apache feather logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and other countries.
+</div>
+
+    </div>
+
+  </body>
+</html>
\ No newline at end of file