Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/10/21 13:24:21 UTC

[GitHub] [beam] iemejia commented on a change in pull request #12963: [BEAM-10983] Add getting started from Spark page

iemejia commented on a change in pull request #12963:
URL: https://github.com/apache/beam/pull/12963#discussion_r509276556



##########
File path: website/www/site/content/en/get-started/from-spark.md
##########
@@ -0,0 +1,261 @@
+---
+title: "Getting started from Apache Spark"
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+# Getting started from Apache Spark
+
+{{< localstorage language language-py >}}
+
+If you already know [_Apache Spark_](http://spark.apache.org/),
+learning _Apache Beam_ is easy.
+The Beam and Spark APIs are similar, so you already know the basic concepts.
+
+Spark stores data in _Spark DataFrames_ for structured data,
+and in _Resilient Distributed Datasets_ (RDDs) for unstructured data.
+We use RDDs throughout this guide.
+
+A _Spark RDD_ represents a collection of elements,
+while the Beam equivalent is called a _Parallel Collection_ (PCollection).
+A PCollection in Beam does _not_ have any ordering guarantees.
+
+Likewise, a transform in Beam is called a _Parallel Transform_ (PTransform).
+
+Here are some examples of common operations and their equivalents in PySpark and Beam.
+
+## Overview
+
+Here's a simple example of a PySpark pipeline that takes the numbers from one to four,
+multiplies them by two, adds all the values together, and prints the result.
+
+{{< highlight py >}}
+import pyspark
+
+sc = pyspark.SparkContext()
+result = (
+    sc.parallelize([1, 2, 3, 4])
+    .map(lambda x: x * 2)
+    .reduce(lambda x, y: x + y)
+)
+print(result)
+{{< /highlight >}}
+
+In Beam you _pipe_ your data through the pipeline using the
+_pipe operator_ `|`, as in `data | beam.Map(...)`, instead of chaining
+methods like `data.map(...)`, but both do the same thing.
+
+Here's what an equivalent pipeline looks like in Beam.
+
+{{< highlight py >}}
+import apache_beam as beam
+
+with beam.Pipeline() as pipeline:
+    result = (
+        pipeline
+        | beam.Create([1, 2, 3, 4])
+        | beam.Map(lambda x: x * 2)
+        | beam.CombineGlobally(sum)
+        | beam.Map(print)
+    )
+{{< /highlight >}}
+
+> ℹ️ Note that we called `print` inside a `Map` transform.
+> That's because we can only access the elements of a PCollection
+> from within a PTransform.
+
+Another thing to note is that Beam pipelines are constructed _lazily_.
+This means that when you _pipe_ `|` data you're only _declaring_ the
+transformations and the order in which you want them to run,
+but the actual computation doesn't happen yet.
+The pipeline runs _after_ the `with beam.Pipeline() as pipeline` context
+closes.
+The pipeline is then sent to your runner of choice, which processes the data.
+
+> ℹ️ When the `with beam.Pipeline() as pipeline` context closes,
+> it implicitly calls `pipeline.run()` which triggers the computation to happen.
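+
+For illustration, here's a minimal sketch of the same pipeline written without
+the `with` block, calling `pipeline.run()` explicitly (the `with` form above is
+the recommended one).
+
+{{< highlight py >}}
+import apache_beam as beam
+
+# Building the pipeline only declares the transformations.
+pipeline = beam.Pipeline()
+(
+    pipeline
+    | beam.Create([1, 2, 3, 4])
+    | beam.Map(lambda x: x * 2)
+    | beam.CombineGlobally(sum)
+    | beam.Map(print)
+)
+
+# Nothing has run yet; run() triggers the actual computation.
+result = pipeline.run()
+result.wait_until_finish()
+{{< /highlight >}}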
+
+A label can optionally be added to a transform using the
+_right shift operator_ `>>` like `data | 'My description' >> beam.Map(...)`.
+Labels serve as comments and make your pipeline easier to debug.
+
+This is how the pipeline looks after adding labels.
+
+{{< highlight py >}}
+import apache_beam as beam
+
+with beam.Pipeline() as pipeline:
+    result = (
+        pipeline
+        | 'Create numbers' >> beam.Create([1, 2, 3, 4])
+        | 'Multiply by two' >> beam.Map(lambda x: x * 2)
+        | 'Sum everything' >> beam.CombineGlobally(sum)
+        | 'Print results' >> beam.Map(print)
+    )
+{{< /highlight >}}
+
+## Setup
+
+Here's a comparison of how to get started in both PySpark and Beam.
+
+{{< table >}}
+<table>
+<tr>
+    <th></th>
+    <th>PySpark</th>
+    <th>Beam</th>
+</tr>
+<tr>
+    <td><b>Install</b></td>
+    <td><code>$ pip install pyspark</code></td>
+    <td><code>$ pip install apache-beam</code></td>
+</tr>
+<tr>
+    <td><b>Imports</b></td>
+    <td><code>import pyspark</code></td>
+    <td><code>import apache_beam as beam</code></td>
+</tr>
+<tr>
+    <td><b>Creating a<br>local pipeline</b></td>
+    <td>
        <code>sc = pyspark.SparkContext()</code><br>
+        <code># Your pipeline code here.</code>
+    </td>
+    <td>
+        <code>with beam.Pipeline() as pipeline:</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;# Your pipeline code here.</code>
+    </td>
+</tr>
+<tr>
+    <td><b>Creating values</b></td>
+    <td><code>values = sc.parallelize([1, 2, 3, 4])</code></td>
+    <td><code>values = pipeline | beam.Create([1, 2, 3, 4])</code></td>
+</tr>
+<tr>
+    <td><b>Creating<br>key-value pairs</b></td>
+    <td>
+        <code>pairs = sc.parallelize([</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key1', 'value1'),</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key2', 'value2'),</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key3', 'value3'),</code><br>
+        <code>])</code>
+    </td>
+    <td>
+        <code>pairs = pipeline | beam.Create([</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key1', 'value1'),</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key2', 'value2'),</code><br>
+        <code>&nbsp;&nbsp;&nbsp;&nbsp;('key3', 'value3'),</code><br>
+        <code>])</code>
+    </td>
+</tr>

Review comment:
       I am not sure if it's worth it, but since we mention lazy computation we could also mention what triggers 'producing results': in Spark this is done by `actions` such as `collect()`, and in Beam by outputting the data (you can relate this to the `print` mention). I saw the note in the `with` section, but we could mention this again here and/or mention `p.run()`.
   Note this is just an extra suggestion and not mandatory at all; the text is already clear, this is 'extra details'.
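
   For example, a rough sketch of the idea (not part of the PR, just to illustrate the suggestion):

   ```python
   import pyspark
   import apache_beam as beam

   # Spark: transformations are lazy; an action such as collect() triggers computation.
   sc = pyspark.SparkContext()
   doubled = sc.parallelize([1, 2, 3, 4]).map(lambda x: x * 2)  # nothing runs yet
   print(doubled.collect())  # collect() is an action, so the computation runs here

   # Beam: building the pipeline is also lazy; run() (implicit in `with`) triggers it.
   pipeline = beam.Pipeline()
   _ = (
       pipeline
       | beam.Create([1, 2, 3, 4])
       | beam.Map(lambda x: x * 2)
       | beam.Map(print)
   )
   pipeline.run().wait_until_finish()  # computation happens when the pipeline runs
   ```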




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org