Posted to reviews@spark.apache.org by "xinrong-meng (via GitHub)" <gi...@apache.org> on 2024/03/01 19:21:33 UTC

Re: [PR] [SPARK-47227][FOLLOW][DOCS] Improve Spark Connect Documentation [spark]

xinrong-meng commented on code in PR #45339:
URL: https://github.com/apache/spark/pull/45339#discussion_r1509439684


##########
docs/spark-connect-overview.md:
##########
@@ -67,8 +67,8 @@ that developers need to be aware of when using Spark Connect:
    the execution environment. In particular, in PySpark, the client does not use Py4J
    and thus the accessing the private fields holding the JVM implementation of `DataFrame`,
    `Column`, `SparkSession`, etc. is not possible (e.g. `df._jdf`).
-2. By design, the Spark Connect protocol is designed around the concepts of Sparks logical
-   plans as the abstraction to be able to declarative describe the operations to be executed
+2. By design, the Spark Connect protocol uses Sparks logical

Review Comment:
   nit: Spark`'`s
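
   For context on the limitation described in the quoted doc text, here is a minimal PySpark sketch (not part of the PR). It assumes a Spark Connect server is already running and reachable at the hypothetical URL `sc://localhost`, and that accessing the Py4J-backed private field `df._jdf` on a Connect-backed DataFrame surfaces as an `AttributeError`, since the client process has no Py4J gateway.

   ```python
   # Minimal sketch (assumption: a Spark Connect server is reachable at sc://localhost).
   from pyspark.sql import SparkSession

   # Connect as a thin client; no JVM is started in this Python process.
   spark = SparkSession.builder.remote("sc://localhost").getOrCreate()
   df = spark.range(10)

   # Classic PySpark exposes the underlying JVM object via the private `_jdf`
   # field; a Spark Connect DataFrame has no such field, because there is no
   # Py4J gateway on the client side.
   try:
       print("Classic PySpark: JVM handle available:", df._jdf)
   except AttributeError:
       print("Spark Connect: no JVM handle; the query is kept as a logical plan")

   # Public DataFrame APIs behave the same either way: the client builds a
   # logical plan and ships it to the server for execution.
   print(df.filter(df.id > 5).count())
   ```

   This also illustrates the second point in the quoted passage: under Spark Connect, operations are described declaratively as Spark logical plans on the client and only executed when sent to the server.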



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org