Posted to commits@spark.apache.org by gu...@apache.org on 2021/10/21 01:27:06 UTC
[spark] branch branch-3.2 updated: [SPARK-37079][PYTHON][SQL] Fix DataFrameWriterV2.partitionedBy to send the arguments to JVM properly
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.2 by this push:
new 0e6fbd7 [SPARK-37079][PYTHON][SQL] Fix DataFrameWriterV2.partitionedBy to send the arguments to JVM properly
0e6fbd7 is described below
commit 0e6fbd7e52c2f71a1d814a520da3dabb37c55937
Author: Takuya UESHIN <ue...@databricks.com>
AuthorDate: Thu Oct 21 10:25:42 2021 +0900
[SPARK-37079][PYTHON][SQL] Fix DataFrameWriterV2.partitionedBy to send the arguments to JVM properly
### What changes were proposed in this pull request?
Fix `DataFrameWriterV2.partitionedBy` to send the arguments to JVM properly.
### Why are the changes needed?
In PySpark, `DataFrameWriterV2.partitionedBy` converts its arguments but never passes them on to the JVM writer, so the requested partitioning is silently dropped.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Manually verified that the arguments are now sent to the JVM.
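The bug is easiest to see as a pattern: a fluent builder method that prepares its arguments but never forwards them to the backing writer, so the call appears to succeed yet has no effect. A minimal, Spark-free sketch of that pattern (all class and method names here are hypothetical stand-ins, not Spark APIs):

```python
class JavaWriterStub:
    """Stand-in for the JVM-side writer that would record the partitioning."""
    def __init__(self):
        self.partitioning = None

    def partitionedBy(self, cols):
        self.partitioning = list(cols)


class BrokenWriterV2:
    """Mirrors the buggy behavior: arguments are converted but never forwarded."""
    def __init__(self):
        self._jwriter = JavaWriterStub()

    def partitionedBy(self, *cols):
        prepared = [c.upper() for c in cols]  # arguments get converted...
        # ...but are never passed to self._jwriter -- the bug
        return self


class FixedWriterV2(BrokenWriterV2):
    """Mirrors the fix: the converted arguments are forwarded to the writer."""
    def partitionedBy(self, *cols):
        prepared = [c.upper() for c in cols]
        self._jwriter.partitionedBy(prepared)  # the one-line fix
        return self


broken = BrokenWriterV2().partitionedBy("year", "month")
fixed = FixedWriterV2().partitionedBy("year", "month")
print(broken._jwriter.partitioning)  # None -- partitioning silently lost
print(fixed._jwriter.partitioning)   # ['YEAR', 'MONTH']
```

The actual patch below is exactly this one-line forwarding call, added after the argument conversion in `DataFrameWriterV2.partitionedBy`.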
Closes #34347 from ueshin/issues/SPARK-37079/partitionBy.
Authored-by: Takuya UESHIN <ue...@databricks.com>
Signed-off-by: Hyukjin Kwon <gu...@apache.org>
(cherry picked from commit 33deeb35f1c994328b577970d4577e6d9288bfc2)
Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
python/pyspark/sql/readwriter.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/python/pyspark/sql/readwriter.py b/python/pyspark/sql/readwriter.py
index 0ef59c0..6575354 100644
--- a/python/pyspark/sql/readwriter.py
+++ b/python/pyspark/sql/readwriter.py
@@ -1115,6 +1115,7 @@ class DataFrameWriterV2(object):
"""
col = _to_java_column(col)
cols = _to_seq(self._spark._sc, [_to_java_column(c) for c in cols])
+ self._jwriter.partitionedBy(col, cols)
return self
@since(3.1)