Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/27 01:55:00 UTC

[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38803: [SPARK-41114] [CONNECT] [PYTHON] [FOLLOW-UP] Python Client support for local data

HyukjinKwon commented on code in PR #38803:
URL: https://github.com/apache/spark/pull/38803#discussion_r1032854396


##########
python/pyspark/sql/connect/session.py:
##########
@@ -205,6 +207,31 @@ def __init__(self, connectionString: str, userId: Optional[str] = None):
         # Create the reader
         self.read = DataFrameReader(self)
 
+    def createDataFrame(self, data: "pd.DataFrame") -> "DataFrame":

Review Comment:
   Actually, the implementation here doesn't match what we have in `createDataFrame(pandas)`.
   
   By default, the Arrow message conversion (more specifically in https://github.com/apache/spark/pull/38659/files#diff-d630cc4be6c65a3c3f7d6dbfe990f99ba992ccc26d9c3aaf6cfe46e163cb7389R514-R521) has to happen in an RDD so we can parallelize it.
   
   For a bit of history, PySpark added the initial RDD-based version first, and later added this local relation as an optimization for small datasets (see also https://github.com/apache/spark/pull/36683).
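   
   A minimal, runnable sketch of the two code paths being contrasted here, assuming a toy layout: the helper names, the driver-side chunking, and the row-count threshold are hypothetical, not PySpark internals (in real PySpark the large-data Arrow conversion runs inside an RDD so it is parallelized across executors, and the local relation path is used only for small inputs).
   
   ```python
   # Illustrative sketch only: helper names and the threshold are hypothetical.
   import pandas as pd
   import pyarrow as pa
   
   
   def _to_local_relation(pdf: pd.DataFrame) -> pa.Table:
       # Hypothetical "small data" path: convert on the driver in one shot
       # (analogous to the local relation optimization).
       return pa.Table.from_pandas(pdf)
   
   
   def _to_arrow_batches(pdf: pd.DataFrame, num_slices: int) -> list:
       # Hypothetical "large data" path: chunk the input so the Arrow
       # conversion could be parallelized (in real PySpark this happens
       # inside an RDD; here it is only simulated on the driver).
       chunks = [pdf.iloc[i::num_slices] for i in range(num_slices)]
       return [pa.RecordBatch.from_pandas(c) for c in chunks if len(c) > 0]
   
   
   def create_dataframe_sketch(pdf: pd.DataFrame, small_threshold: int = 10_000):
       # The threshold is illustrative only, not a real Spark configuration.
       if len(pdf) <= small_threshold:
           return _to_local_relation(pdf)
       return _to_arrow_batches(pdf, num_slices=8)
   ```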
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

