Posted to reviews@spark.apache.org by "HyukjinKwon (via GitHub)" <gi...@apache.org> on 2023/02/20 14:17:56 UTC

[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40092: [SPARK-42475][CONNECT][DOCS] Getting Started: Live Notebook for Spark Connect

HyukjinKwon commented on code in PR #40092:
URL: https://github.com/apache/spark/pull/40092#discussion_r1112012618


##########
python/docs/source/getting_started/quickstart_connect.ipynb:
##########
@@ -0,0 +1,1118 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Quickstart: DataFrame with Spark Connect\n",
+    "\n",
+    "This is a short introduction and quickstart for the DataFrame with Spark Connect. A DataFrame with Spark Connect is virtually, conceptually identical to an existing [PySpark DataFrame](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html?highlight=dataframe#pyspark.sql.DataFrame), so most of the examples from 'Live Notebook: DataFrame' at [the quickstart page](https://spark.apache.org/docs/latest/api/python/getting_started/index.html) can be reused directly.\n",
+    "\n",
+    "However, it does not yet support some key features such as [RDD](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.html?highlight=rdd#pyspark.RDD) and [SparkSession.conf](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.SparkSession.conf.html#pyspark.sql.SparkSession.conf), so you need to consider it when using DataFrame with Spark Connect.\n",
+    "\n",
+    "This notebook shows the basic usages of the DataFrame with Spark Connect geared mainly for those new to Spark Connect, along with comments of which features is not supported compare to the existing DataFrame.\n",
+    "\n",
+    "There is also other useful information in Apache Spark documentation site, see the latest version of [Spark SQL and DataFrames](https://spark.apache.org/docs/latest/sql-programming-guide.html).\n",
+    "\n",
+    "PySpark applications start with initializing `SparkSession` which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Spark Connect uses SparkSession from `pyspark.sql.connect.session` instead of `pyspark.sql.SparkSession`.\n",
+    "from pyspark.sql.connect.session import SparkSession\n",
+    "\n",
+    "spark = SparkSession.builder.getOrCreate()"

Review Comment:
   The right way to get the remote session is:
   
   ```python
   from pyspark.sql import SparkSession
   SparkSession.builder.remote("local[*]").getOrCreate()
   ```
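   
   For completeness, a minimal sketch of how the notebook cell could look with the remote builder (assuming a local in-process Spark Connect server started via `local[*]`; the `spark.range` call is just for illustration):
   
   ```python
   from pyspark.sql import SparkSession
   
   # Build a session backed by a Spark Connect server; "local[*]" starts
   # an in-process server, which is convenient for a live notebook.
   spark = SparkSession.builder.remote("local[*]").getOrCreate()
   
   # Once the remote session exists, DataFrame code is unchanged:
   df = spark.range(3)  # DataFrame with a single `id` column: 0, 1, 2
   df.show()
   ```
   
   This keeps the public `pyspark.sql.SparkSession` entry point rather than importing from the internal `pyspark.sql.connect.session` module.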



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org