Posted to commits@hudi.apache.org by vi...@apache.org on 2020/05/04 00:20:32 UTC

[incubator-hudi] branch asf-site updated: Travis CI build asf-site

This is an automated email from the ASF dual-hosted git repository.

vinoth pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 79fb998  Travis CI build asf-site
79fb998 is described below

commit 79fb9989614909d23885760b253d446a38ea66b5
Author: CI <ci...@hudi.apache.org>
AuthorDate: Mon May 4 00:20:21 2020 +0000

    Travis CI build asf-site
---
 content/docs/quick-start-guide.html | 269 +++++++++++++++++++++++++++++++++---
 1 file changed, 249 insertions(+), 20 deletions(-)

diff --git a/content/docs/quick-start-guide.html b/content/docs/quick-start-guide.html
index 8e40382..e8cbbf7 100644
--- a/content/docs/quick-start-guide.html
+++ b/content/docs/quick-start-guide.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Quick-Start Guide - Apache Hudi</title>
-<meta name="description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
+<meta name="description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.Scala example">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/quick-start-guide.html">
 
 
-  <meta property="og:description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
+  <meta property="og:description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.Scala example">
 
 
 
@@ -335,14 +335,29 @@
           <nav class="toc">
             <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
-  <li><a href="#setup-spark-shell">Setup spark-shell</a></li>
-  <li><a href="#insert-data">Insert data</a></li>
-  <li><a href="#query-data">Query data</a></li>
-  <li><a href="#update-data">Update data</a></li>
-  <li><a href="#incremental-query">Incremental query</a></li>
-  <li><a href="#point-in-time-query">Point in time query</a></li>
-  <li><a href="#deletes">Delete data</a></li>
-  <li><a href="#where-to-go-from-here">Where to go from here?</a></li>
+  <li><a href="#scala-example">Scala example</a>
+    <ul>
+      <li><a href="#setup">Setup</a></li>
+      <li><a href="#insert-data">Insert data</a></li>
+      <li><a href="#query-data">Query data</a></li>
+      <li><a href="#update-data">Update data</a></li>
+      <li><a href="#incremental-query">Incremental query</a></li>
+      <li><a href="#point-in-time-query">Point in time query</a></li>
+      <li><a href="#deletes">Delete data</a></li>
+    </ul>
+  </li>
+  <li><a href="#pyspark-example">Pyspark example</a>
+    <ul>
+      <li><a href="#setup-1">Setup</a></li>
+      <li><a href="#insert-data-1">Insert data</a></li>
+      <li><a href="#query-data-1">Query data</a></li>
+      <li><a href="#update-data-1">Update data</a></li>
+      <li><a href="#incremental-query-1">Incremental query</a></li>
+      <li><a href="#point-in-time-query-1">Point in time query</a></li>
+      <li><a href="#deletes">Delete data</a></li>
+      <li><a href="#where-to-go-from-here">Where to go from here?</a></li>
+    </ul>
+  </li>
 </ul>
           </nav>
         </aside>
@@ -351,13 +366,15 @@
 code snippets that allow you to insert and update a Hudi table of default table type: 
 <a href="/docs/concepts.html#copy-on-write-table">Copy on Write</a>. 
 After each write operation we will also show how to read the data both snapshot and incrementally.</p>
+<h1 id="scala-example">Scala example</h1>
 
-<h2 id="setup-spark-shell">Setup spark-shell</h2>
+<h2 id="setup">Setup</h2>
 
 <p>Hudi works with Spark-2.x versions. You can follow instructions <a href="https://spark.apache.org/downloads.html">here</a> for setting up Spark. 
 From the extracted directory run spark-shell with Hudi as:</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">spark</span><span class="o">-</span><span class="mf">2.4</span><span class="o">.</span><span class="mi">4</span><span class="o">-</span><span class="n">bin</span><span class="o">-</span><span class="n">hadoop2</span><span class="o">.</span><span class="mi">7</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class [...]
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="n">spark</span><span class="o">-</span><span class="mf">2.4</span><span class="o">.</span><span class="mi">4</span><span class="o">-</span><span class="n">bin</span><span class="o">-</span><span class="n">hadoop2</span><span class="o">.</span><span class="mi">7</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">\</span>
   <span class="o">--</span><span class="n">packages</span> <span class="nv">org</span><span class="o">.</span><span class="py">apache</span><span class="o">.</span><span class="py">hudi</span><span class="k">:</span><span class="kt">hudi-spark-bundle_2.</span><span class="err">11</span><span class="kt">:</span><span class="err">0</span><span class="kt">.</span><span class="err">5</span><span class="kt">.</span><span class="err">1</span><span class="kt">-incubating</span><span class="o">, [...]
   <span class="kt">--conf</span> <span class="kt">'spark.serializer</span><span class="o">=</span><span class="nv">org</span><span class="o">.</span><span class="py">apache</span><span class="o">.</span><span class="py">spark</span><span class="o">.</span><span class="py">serializer</span><span class="o">.</span><span class="py">KryoSerializer</span><span class="o">'</span>
 </code></pre></div></div>
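
Decoded from the highlighting above, the launch command reads roughly as follows. The tail of the --packages list is truncated in this diff, so the spark-avro coordinate shown here is inferred from the note below (Spark 2.4.4, Scala 2.11), not copied from the page:

    // spark-shell
    spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
      --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
      --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
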
@@ -374,7 +391,8 @@ From the extracted directory run spark-shell with Hudi as:</p>
 
 <p>Setup table name, base path and a data generator to generate records for this guide.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">import</span> <span class="nn">org.apache.hudi.QuickstartUtils._</span>
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="k">import</span> <span class="nn">org.apache.hudi.QuickstartUtils._</span>
 <span class="k">import</span> <span class="nn">scala.collection.JavaConversions._</span>
 <span class="k">import</span> <span class="nn">org.apache.spark.sql.SaveMode._</span>
 <span class="k">import</span> <span class="nn">org.apache.hudi.DataSourceReadOptions._</span>
@@ -393,7 +411,8 @@ can generate sample inserts and updates based on the the sample trip schema <a h
 
 <p>Generate some new trips, load them into a DataFrame and write the DataFrame into the Hudi table as below.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">inserts</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateInserts</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="k">val</span> <span class="nv">inserts</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateInserts</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
 <span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">json</span><span class="o">(</span><span class="nv">spark</span><span class="o">.</span><span class="py">sparkContext</span><span class="o">.</span><span class="py">parallelize</span><span class="o">(</span><span class="n">inserts</span><span class="o">,</span> <span class="mi">2</span><spa [...]
 <span class="nv">df</span><span class="o">.</span><span class="py">write</span><span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"hudi"</span><span class="o">).</span>
   <span class="nf">options</span><span class="o">(</span><span class="n">getQuickstartWriteConfigs</span><span class="o">).</span>
@@ -418,7 +437,8 @@ Here we are using the default write operation : <code class="highlighter-rouge">
 
 <p>Load the data files into a DataFrame.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">tripsSnapshotDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="k">val</span> <span class="nv">tripsSnapshotDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
   <span class="n">read</span><span class="o">.</span>
   <span class="nf">format</span><span class="o">(</span><span class="s">"hudi"</span><span class="o">).</span>
   <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="o">)</span>
@@ -437,7 +457,8 @@ Refer to <a href="/docs/concepts#table-types--queries">Table types and queries</
 <p>This is similar to inserting new data. Generate updates to existing trips using the data generator, load them into a DataFrame 
 and write the DataFrame into the Hudi table.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">updates</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateUpdates</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="k">val</span> <span class="nv">updates</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateUpdates</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
 <span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">json</span><span class="o">(</span><span class="nv">spark</span><span class="o">.</span><span class="py">sparkContext</span><span class="o">.</span><span class="py">parallelize</span><span class="o">(</span><span class="n">updates</span><span class="o">,</span> <span class="mi">2</span><spa [...]
 <span class="nv">df</span><span class="o">.</span><span class="py">write</span><span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"hudi"</span><span class="o">).</span>
   <span class="nf">options</span><span class="o">(</span><span class="n">getQuickstartWriteConfigs</span><span class="o">).</span>
@@ -459,7 +480,8 @@ denoted by the timestamp. Look for changes in <code class="highlighter-rouge">_h
 This can be achieved using Hudi’s incremental querying and providing a begin time from which changes need to be streamed. 
 We do not need to specify endTime if we want all changes after the given commit (as is the common case).</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// reload data
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+// reload data
 </span><span class="n">spark</span><span class="o">.</span>
   <span class="n">read</span><span class="o">.</span>
   <span class="nf">format</span><span class="o">(</span><span class="s">"hudi"</span><span class="o">).</span>
@@ -487,7 +509,8 @@ feature is that it now lets you author streaming pipelines on batch data.</p>
 <p>Let's look at how to query data as of a specific time. The specific time can be represented by pointing endTime to a 
 specific commit time and beginTime to “000” (denoting earliest possible commit time).</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">beginTime</span> <span class="k">=</span> <span class="s">"000"</span> <span class="c1">// Represents all commits &gt; this time.
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+</span><span class="k">val</span> <span class="nv">beginTime</span> <span class="k">=</span> <span class="s">"000"</span> <span class="c1">// Represents all commits &gt; this time.
 </span><span class="k">val</span> <span class="nv">endTime</span> <span class="k">=</span> <span class="nf">commits</span><span class="o">(</span><span class="nv">commits</span><span class="o">.</span><span class="py">length</span> <span class="o">-</span> <span class="mi">2</span><span class="o">)</span> <span class="c1">// commit time we are interested in
 </span>
 <span class="c1">//incrementally query data
@@ -497,13 +520,14 @@ specific commit time and beginTime to “000” (denoting earliest possible comm
   <span class="nf">option</span><span class="o">(</span><span class="nc">END_INSTANTTIME_OPT_KEY</span><span class="o">,</span> <span class="n">endTime</span><span class="o">).</span>
   <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span><span class="o">)</span>
 <span class="nv">tripsPointInTimeDF</span><span class="o">.</span><span class="py">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_trips_point_in_time"</span><span class="o">)</span>
-<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_trips_point_in_time where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
 </code></pre></div></div>
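The middle of this block (the read that sets both instant-time bounds) falls between hunks; a sketch, with the read-option constants assumed to parallel the incremental query above:

    // spark-shell
    val beginTime = "000" // Represents all commits > this time.
    val endTime = commits(commits.length - 2) // commit time we are interested in

    // query point in time data
    val tripsPointInTimeDF = spark.read.format("hudi").
      option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
      option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
      option(END_INSTANTTIME_OPT_KEY, endTime).
      load(basePath)
    tripsPointInTimeDF.createOrReplaceTempView("hudi_trips_point_in_time")
    spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare > 20.0").show()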
 
 <h2 id="deletes">Delete data</h2>
 <p>Delete records for the HoodieKeys passed in.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// fetch total records count
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// spark-shell
+// fetch total records count
 </span><span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select uuid, partitionPath from hudi_trips_snapshot"</span><span class="o">).</span><span class="py">count</span><span class="o">()</span>
 <span class="c1">// fetch two records to be deleted
 </span><span class="k">val</span> <span class="nv">ds</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select uuid, partitionPath from hudi_trips_snapshot"</span><span class="o">).</span><span class="py">limit</span><span class="o">(</span><span class="mi">2</span><span class="o">)</span>
@@ -532,6 +556,211 @@ specific commit time and beginTime to “000” (denoting earliest possible comm
 </code></pre></div></div>
 <p>Note: Only <code class="highlighter-rouge">Append</code> mode is supported for delete operation.</p>
 
+<h1 id="pyspark-example">Pyspark example</h1>
+<h2 id="setup-1">Setup</h2>
+
+<p>Hudi works with Spark-2.x versions. You can follow instructions <a href="https://spark.apache.org/downloads.html">here</a> for setting up Spark. 
+From the extracted directory run pyspark with Hudi as:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">export</span> <span class="n">PYSPARK_PYTHON</span><span class="o">=</span><span class="err">$</span><span class="p">(</span><span class="n">which</span> <span class="n">python3</span><span class="p">)</span>
+<span class="n">spark</span><span class="o">-</span><span class="mf">2.4.4</span><span class="o">-</span><span class="nb">bin</span><span class="o">-</span><span class="n">hadoop2</span><span class="mf">.7</span><span class="o">/</span><span class="nb">bin</span><span class="o">/</span><span class="n">pyspark</span> \
+  <span class="o">--</span><span class="n">packages</span> <span class="n">org</span><span class="o">.</span><span class="n">apache</span><span class="o">.</span><span class="n">hudi</span><span class="p">:</span><span class="n">hudi</span><span class="o">-</span><span class="n">spark</span><span class="o">-</span><span class="n">bundle_2</span><span class="mf">.11</span><span class="p">:</span><span class="mf">0.5.1</span><span class="o">-</span><span class="n">incubating</span><span cl [...]
+  <span class="o">--</span><span class="n">conf</span> <span class="s">'spark.serializer=org.apache.spark.serializer.KryoSerializer'</span>
+</code></pre></div></div>
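
Decoded, the launch command reads as follows; the tail of the --packages list is truncated in this diff, so the spark-avro coordinate is again inferred from the note below rather than copied from the page:

    # pyspark
    export PYSPARK_PYTHON=$(which python3)
    spark-2.4.4-bin-hadoop2.7/bin/pyspark \
      --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
      --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'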
+
+<div class="notice--info">
+  <h4>Please note the following: </h4>
+<ul>
+  <li>spark-avro module needs to be specified in --packages as it is not included with pyspark by default</li>
+  <li>spark-avro and spark versions must match (we have used 2.4.4 for both above)</li>
+  <li>we have used hudi-spark-bundle built for scala 2.11 since the spark-avro module used also depends on 2.11. 
+         If spark-avro_2.12 is used, correspondingly hudi-spark-bundle_2.12 needs to be used. </li>
+</ul>
+</div>
+
+<p>Setup table name, base path and a data generator to generate records for this guide.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">tableName</span> <span class="o">=</span> <span class="s">"hudi_trips_cow"</span>
+<span class="n">basePath</span> <span class="o">=</span> <span class="s">"file:///tmp/hudi_trips_cow"</span>
+<span class="n">dataGen</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">_jvm</span><span class="o">.</span><span class="n">org</span><span class="o">.</span><span class="n">apache</span><span class="o">.</span><span class="n">hudi</span><span class="o">.</span><span class="n">QuickstartUtils</span><span class="o">.</span><span class="n">DataGenerator</span><span class="p">()</span>
+</code></pre></div></div>
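
For readability, the setup above decoded from its HTML highlighting:

    # pyspark
    tableName = "hudi_trips_cow"
    basePath = "file:///tmp/hudi_trips_cow"
    dataGen = sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator()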
+
+<p class="notice--info">The <a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L50">DataGenerator</a> 
+can generate sample inserts and updates based on the sample trip schema <a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L57">here</a></p>
+
+<h2 id="insert-data-1">Insert data</h2>
+
+<p>Generate some new trips, load them into a DataFrame and write the DataFrame into the Hudi table as below.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">inserts</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">_jvm</span><span class="o">.</span><span class="n">org</span><span class="o">.</span><span class="n">apache</span><span class="o">.</span><span class="n">hudi</span><span class="o">.</span><span class="n">QuickstartUtils</span><span class="o">.</span><span class="n">convertToStringList</span><span class="p">(</span><span class="n">dataGen</span><span class="o">. [...]
+<span class="n">df</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">read</span><span class="o">.</span><span class="n">json</span><span class="p">(</span><span class="n">spark</span><span class="o">.</span><span class="n">sparkContext</span><span class="o">.</span><span class="n">parallelize</span><span class="p">(</span><span class="n">inserts</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
+
+<span class="n">hudi_options</span> <span class="o">=</span> <span class="p">{</span>
+  <span class="s">'hoodie.table.name'</span><span class="p">:</span> <span class="n">tableName</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.recordkey.field'</span><span class="p">:</span> <span class="s">'uuid'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.partitionpath.field'</span><span class="p">:</span> <span class="s">'partitionpath'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.table.name'</span><span class="p">:</span> <span class="n">tableName</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.operation'</span><span class="p">:</span> <span class="s">'insert'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.precombine.field'</span><span class="p">:</span> <span class="s">'ts'</span><span class="p">,</span>
+  <span class="s">'hoodie.upsert.shuffle.parallelism'</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span> 
+  <span class="s">'hoodie.insert.shuffle.parallelism'</span><span class="p">:</span> <span class="mi">2</span>
+<span class="p">}</span>
+
+<span class="n">df</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">options</span><span class="p">(</span><span class="o">**</span><span class="n">hudi_options</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">mode</span><span class="p">(</span><span class="s">"overwrite"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">save</span><span class="p">(</span><span class="n">basePath</span><span class="p">)</span>
+</code></pre></div></div>
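
Decoded; the truncated first line is completed by analogy with the Scala generateInserts(10) call above:

    # pyspark
    inserts = sc._jvm.org.apache.hudi.QuickstartUtils.convertToStringList(dataGen.generateInserts(10))
    df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))

    hudi_options = {
      'hoodie.table.name': tableName,
      'hoodie.datasource.write.recordkey.field': 'uuid',
      'hoodie.datasource.write.partitionpath.field': 'partitionpath',
      'hoodie.datasource.write.table.name': tableName,
      'hoodie.datasource.write.operation': 'insert',
      'hoodie.datasource.write.precombine.field': 'ts',
      'hoodie.upsert.shuffle.parallelism': 2,
      'hoodie.insert.shuffle.parallelism': 2
    }

    df.write.format("hudi"). \
      options(**hudi_options). \
      mode("overwrite"). \
      save(basePath)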
+
+<p class="notice--info"><code class="highlighter-rouge">mode(Overwrite)</code> overwrites and recreates the table if it already exists.
+You can check the data generated under <code class="highlighter-rouge">/tmp/hudi_trips_cow/&lt;region&gt;/&lt;country&gt;/&lt;city&gt;/</code>. We provided a record key 
+(<code class="highlighter-rouge">uuid</code> in <a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58">schema</a>), partition field (<code class="highlighter-rouge">region/county/city</code>) and combine logic (<code class="highlighter-rouge">ts</code> in 
+<a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58">schema</a>) to ensure trip records are unique within each partition. For more info, refer to 
+<a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=113709185#FAQ-HowdoImodelthedatastoredinHudi">Modeling data stored in Hudi</a>
+and for info on ways to ingest data into Hudi, refer to <a href="/docs/writing_data.html">Writing Hudi Tables</a>.
+Here we are using the default write operation : <code class="highlighter-rouge">upsert</code>. If you have a workload without updates, you can also issue 
+<code class="highlighter-rouge">insert</code> or <code class="highlighter-rouge">bulk_insert</code> operations which could be faster. To know more, refer to <a href="/docs/writing_data#write-operations">Write operations</a></p>
+
+<h2 id="query-data-1">Query data</h2>
+
+<p>Load the data files into a DataFrame.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">tripsSnapshotDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span> \
+  <span class="n">read</span><span class="o">.</span> \
+  <span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">load</span><span class="p">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="p">)</span>
+
+<span class="n">tripsSnapshotDF</span><span class="o">.</span><span class="n">createOrReplaceTempView</span><span class="p">(</span><span class="s">"hudi_trips_snapshot"</span><span class="p">)</span>
+
+<span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select fare, begin_lon, begin_lat, ts from  hudi_trips_snapshot where fare &gt; 20.0"</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+<span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot"</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre></div></div>
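
Plain-text rendering of the snapshot query above:

    # pyspark
    tripsSnapshotDF = spark. \
      read. \
      format("hudi"). \
      load(basePath + "/*/*/*/*")

    tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")

    spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
    spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from hudi_trips_snapshot").show()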
+
+<p class="notice--info">This query provides snapshot querying of the ingested data. Since our partition path (<code class="highlighter-rouge">region/country/city</code>) is 3 levels nested 
+from base path we've used <code class="highlighter-rouge">load(basePath + "/*/*/*/*")</code>. 
+Refer to <a href="/docs/concepts#table-types--queries">Table types and queries</a> for more info on all table types and query types supported.</p>
+
+<h2 id="update-data-1">Update data</h2>
+
+<p>This is similar to inserting new data. Generate updates to existing trips using the data generator, load them into a DataFrame 
+and write the DataFrame into the Hudi table.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">updates</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">_jvm</span><span class="o">.</span><span class="n">org</span><span class="o">.</span><span class="n">apache</span><span class="o">.</span><span class="n">hudi</span><span class="o">.</span><span class="n">QuickstartUtils</span><span class="o">.</span><span class="n">convertToStringList</span><span class="p">(</span><span class="n">dataGen</span><span class="o">. [...]
+<span class="n">df</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">read</span><span class="o">.</span><span class="n">json</span><span class="p">(</span><span class="n">spark</span><span class="o">.</span><span class="n">sparkContext</span><span class="o">.</span><span class="n">parallelize</span><span class="p">(</span><span class="n">updates</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
+<span class="n">df</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">options</span><span class="p">(</span><span class="o">**</span><span class="n">hudi_options</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">mode</span><span class="p">(</span><span class="s">"append"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">save</span><span class="p">(</span><span class="n">basePath</span><span class="p">)</span>
+</code></pre></div></div>
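
Decoded; the truncated first line is completed by analogy with the Scala generateUpdates(10) call above:

    # pyspark
    updates = sc._jvm.org.apache.hudi.QuickstartUtils.convertToStringList(dataGen.generateUpdates(10))
    df = spark.read.json(spark.sparkContext.parallelize(updates, 2))
    df.write.format("hudi"). \
      options(**hudi_options). \
      mode("append"). \
      save(basePath)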
+
+<p class="notice--info">Notice that the save mode is now <code class="highlighter-rouge">Append</code>. In general, always use append mode unless you are trying to create the table for the first time.
+<a href="#query-data">Querying</a> the data again will now show updated trips. Each write operation generates a new <a href="http://hudi.incubator.apache.org/docs/concepts.html">commit</a> 
+denoted by the timestamp. Look for changes in <code class="highlighter-rouge">_hoodie_commit_time</code>, <code class="highlighter-rouge">rider</code>, <code class="highlighter-rouge">driver</code> fields for the same <code class="highlighter-rouge">_hoodie_record_key</code>s in previous commit.</p>
+
+<h2 id="incremental-query-1">Incremental query</h2>
+
+<p>Hudi also provides the capability to obtain a stream of records that changed since a given commit timestamp. 
+This can be achieved using Hudi’s incremental querying and providing a begin time from which changes need to be streamed. 
+We do not need to specify endTime if we want all changes after the given commit (as is the common case).</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+# reload data
+</span><span class="n">spark</span><span class="o">.</span> \
+  <span class="n">read</span><span class="o">.</span> \
+  <span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">load</span><span class="p">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">createOrReplaceTempView</span><span class="p">(</span><span class="s">"hudi_trips_snapshot"</span><span class="p">)</span>
+
+<span class="n">commits</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">row</span><span class="p">:</span> <span class="n">row</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select distinct(_hoodie_commit_t [...]
+<span class="n">beginTime</span> <span class="o">=</span> <span class="n">commits</span><span class="p">[</span><span class="nb">len</span><span class="p">(</span><span class="n">commits</span><span class="p">)</span> <span class="o">-</span> <span class="mi">2</span><span class="p">]</span> <span class="c1"># commit time we are interested in
+</span>
+<span class="c1"># incrementally query data
+</span><span class="n">incremental_read_options</span> <span class="o">=</span> <span class="p">{</span>
+  <span class="s">'hoodie.datasource.query.type'</span><span class="p">:</span> <span class="s">'incremental'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.read.begin.instanttime'</span><span class="p">:</span> <span class="n">beginTime</span><span class="p">,</span>
+<span class="p">}</span>
+
+<span class="n">tripsIncrementalDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">read</span><span class="o">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">options</span><span class="p">(</span><span class="o">**</span><span class="n">incremental_read_options</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">load</span><span class="p">(</span><span class="n">basePath</span><span class="p">)</span>
+<span class="n">tripsIncrementalDF</span><span class="o">.</span><span class="n">createOrReplaceTempView</span><span class="p">(</span><span class="s">"hudi_trips_incremental"</span><span class="p">)</span>
+
+<span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_trips_incremental where fare &gt; 20.0"</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre></div></div>
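
Decoded; the commits line is truncated in this diff, so its tail (the order-by and collect()) is an assumption consistent with how commits is indexed just below:

    # pyspark
    # reload data
    spark. \
      read. \
      format("hudi"). \
      load(basePath + "/*/*/*/*"). \
      createOrReplaceTempView("hudi_trips_snapshot")

    commits = list(map(lambda row: row[0],
                       spark.sql("select distinct(_hoodie_commit_time) as commitTime from hudi_trips_snapshot order by commitTime").collect()))
    beginTime = commits[len(commits) - 2]  # commit time we are interested in

    # incrementally query data
    incremental_read_options = {
      'hoodie.datasource.query.type': 'incremental',
      'hoodie.datasource.read.begin.instanttime': beginTime,
    }

    tripsIncrementalDF = spark.read.format("hudi"). \
      options(**incremental_read_options). \
      load(basePath)
    tripsIncrementalDF.createOrReplaceTempView("hudi_trips_incremental")

    spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_incremental where fare > 20.0").show()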
+
+<p class="notice--info">This will give all changes that happened after the beginTime commit with the filter of fare &gt; 20.0. The unique thing about this
+feature is that it now lets you author streaming pipelines on batch data.</p>
+
+<h2 id="point-in-time-query-1">Point in time query</h2>
+
+<p>Let's look at how to query data as of a specific time. The specific time can be represented by pointing endTime to a 
+specific commit time and beginTime to “000” (denoting earliest possible commit time).</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+</span><span class="n">beginTime</span> <span class="o">=</span> <span class="s">"000"</span> <span class="c1"># Represents all commits &gt; this time.
+</span><span class="n">endTime</span> <span class="o">=</span> <span class="n">commits</span><span class="p">[</span><span class="nb">len</span><span class="p">(</span><span class="n">commits</span><span class="p">)</span> <span class="o">-</span> <span class="mi">2</span><span class="p">]</span>
+
+<span class="c1"># query point in time data
+</span><span class="n">point_in_time_read_options</span> <span class="o">=</span> <span class="p">{</span>
+  <span class="s">'hoodie.datasource.query.type'</span><span class="p">:</span> <span class="s">'incremental'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.read.end.instanttime'</span><span class="p">:</span> <span class="n">endTime</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.read.begin.instanttime'</span><span class="p">:</span> <span class="n">beginTime</span>
+<span class="p">}</span>
+
+<span class="n">tripsPointInTimeDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">read</span><span class="o">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">options</span><span class="p">(</span><span class="o">**</span><span class="n">point_in_time_read_options</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">load</span><span class="p">(</span><span class="n">basePath</span><span class="p">)</span>
+
+<span class="n">tripsPointInTimeDF</span><span class="o">.</span><span class="n">createOrReplaceTempView</span><span class="p">(</span><span class="s">"hudi_trips_point_in_time"</span><span class="p">)</span>
+<span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare &gt; 20.0"</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre></div></div>
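
Plain-text rendering of the point in time query above:

    # pyspark
    beginTime = "000"  # Represents all commits > this time.
    endTime = commits[len(commits) - 2]

    # query point in time data
    point_in_time_read_options = {
      'hoodie.datasource.query.type': 'incremental',
      'hoodie.datasource.read.end.instanttime': endTime,
      'hoodie.datasource.read.begin.instanttime': beginTime
    }

    tripsPointInTimeDF = spark.read.format("hudi"). \
      options(**point_in_time_read_options). \
      load(basePath)

    tripsPointInTimeDF.createOrReplaceTempView("hudi_trips_point_in_time")
    spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare > 20.0").show()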
+
+<h2 id="deletes">Delete data</h2>
+<p>Delete records for the HoodieKeys passed in.</p>
+
+<p>Note: Only <code class="highlighter-rouge">Append</code> mode is supported for delete operation.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># pyspark
+# fetch total records count
+</span><span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select uuid, partitionPath from hudi_trips_snapshot"</span><span class="p">)</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
+<span class="c1"># fetch two records to be deleted
+</span><span class="n">ds</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select uuid, partitionPath from hudi_trips_snapshot"</span><span class="p">)</span><span class="o">.</span><span class="n">limit</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span>
+
+<span class="c1"># issue deletes
+</span><span class="n">hudi_delete_options</span> <span class="o">=</span> <span class="p">{</span>
+  <span class="s">'hoodie.table.name'</span><span class="p">:</span> <span class="n">tableName</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.recordkey.field'</span><span class="p">:</span> <span class="s">'uuid'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.partitionpath.field'</span><span class="p">:</span> <span class="s">'partitionpath'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.table.name'</span><span class="p">:</span> <span class="n">tableName</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.operation'</span><span class="p">:</span> <span class="s">'delete'</span><span class="p">,</span>
+  <span class="s">'hoodie.datasource.write.precombine.field'</span><span class="p">:</span> <span class="s">'ts'</span><span class="p">,</span>
+  <span class="s">'hoodie.upsert.shuffle.parallelism'</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span> 
+  <span class="s">'hoodie.insert.shuffle.parallelism'</span><span class="p">:</span> <span class="mi">2</span>
+<span class="p">}</span>
+
+<span class="kn">from</span> <span class="nn">pyspark.sql.functions</span> <span class="kn">import</span> <span class="n">lit</span>
+<span class="n">deletes</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">row</span><span class="p">:</span> <span class="p">(</span><span class="n">row</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">row</span><span class="p">[</span><span class="mi">1</span><span class="p">]),</span> <span class="n">ds</span> [...]
+<span class="n">df</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="n">sparkContext</span><span class="o">.</span><span class="n">parallelize</span><span class="p">(</span><span class="n">deletes</span><span class="p">)</span><span class="o">.</span><span class="n">toDF</span><span class="p">([</span><span class="s">'partitionpath'</span><span class="p">,</span> <span class="s">'uuid'</span><span class="p">])</span><span class="o">.</span>< [...]
+<span class="n">df</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">options</span><span class="p">(</span><span class="o">**</span><span class="n">hudi_delete_options</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">mode</span><span class="p">(</span><span class="s">"append"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">save</span><span class="p">(</span><span class="n">basePath</span><span class="p">)</span>
+
+<span class="c1"># run the same read query as above.
+</span><span class="n">roAfterDeleteViewDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span> \
+  <span class="n">read</span><span class="o">.</span> \
+  <span class="nb">format</span><span class="p">(</span><span class="s">"hudi"</span><span class="p">)</span><span class="o">.</span> \
+  <span class="n">load</span><span class="p">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="p">)</span> 
+<span class="n">roAfterDeleteViewDF</span><span class="o">.</span><span class="n">registerTempTable</span><span class="p">(</span><span class="s">"hudi_trips_snapshot"</span><span class="p">)</span>
+<span class="c1"># fetch should return (total - 2) records
+</span><span class="n">spark</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="s">"select uuid, partitionPath from hudi_trips_snapshot"</span><span class="p">)</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
+</code></pre></div></div>
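
Decoded; two lines are truncated in this diff, so their tails are assumptions: ds.collect() feeds the tuples, and the columns are named in select order (uuid, then partitionpath — note the page's highlighted snippet shows them reversed) plus a ts column via lit() so the precombine field exists:

    # pyspark
    # fetch total records count
    spark.sql("select uuid, partitionPath from hudi_trips_snapshot").count()
    # fetch two records to be deleted
    ds = spark.sql("select uuid, partitionPath from hudi_trips_snapshot").limit(2)

    # issue deletes
    hudi_delete_options = {
      'hoodie.table.name': tableName,
      'hoodie.datasource.write.recordkey.field': 'uuid',
      'hoodie.datasource.write.partitionpath.field': 'partitionpath',
      'hoodie.datasource.write.table.name': tableName,
      'hoodie.datasource.write.operation': 'delete',
      'hoodie.datasource.write.precombine.field': 'ts',
      'hoodie.upsert.shuffle.parallelism': 2,
      'hoodie.insert.shuffle.parallelism': 2
    }

    from pyspark.sql.functions import lit
    deletes = list(map(lambda row: (row[0], row[1]), ds.collect()))  # assumed tail: .collect()
    # column names follow the select order above; ts is added for the precombine field
    df = spark.sparkContext.parallelize(deletes).toDF(['uuid', 'partitionpath']).withColumn('ts', lit(0.0))
    df.write.format("hudi"). \
      options(**hudi_delete_options). \
      mode("append"). \
      save(basePath)

    # run the same read query as above
    roAfterDeleteViewDF = spark. \
      read. \
      format("hudi"). \
      load(basePath + "/*/*/*/*")
    roAfterDeleteViewDF.registerTempTable("hudi_trips_snapshot")
    # fetch should return (total - 2) records
    spark.sql("select uuid, partitionPath from hudi_trips_snapshot").count()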
+
 <h2 id="where-to-go-from-here">Where to go from here?</h2>
 
 <p>You can also do the quickstart by <a href="https://github.com/apache/incubator-hudi#building-apache-hudi-from-source">building hudi yourself</a>,