Posted to reviews@spark.apache.org by "Hisoka-X (via GitHub)" <gi...@apache.org> on 2023/04/06 13:59:58 UTC

[GitHub] [spark] Hisoka-X opened a new pull request, #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Hisoka-X opened a new pull request, #40684:
URL: https://github.com/apache/spark/pull/40684

   
   ### What changes were proposed in this pull request?
   Add a check for operations that involve multiple data frames, because Spark does not support, for example, joining two data frames from different Spark Connect sessions.
   
   
   ### Why are the changes needed?
   Spark does not support, for example, joining two data frames from different Spark Connect sessions. To avoid confusing server-side exceptions, the client should fail clearly when it tries to construct such a composition.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Added a new test.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1538371068

   Thanks! @HyukjinKwon @amaliujia @hvanhovell @grundprinzip @xinrong-meng 




[GitHub] [spark] xinrong-meng commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1183174981


##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -249,14 +254,18 @@ def crossJoin(self, other: "DataFrame") -> "DataFrame":
             raise Exception("Cannot cartesian join when self._plan is empty.")
         if other._plan is None:
             raise Exception("Cannot cartesian join when other._plan is empty.")
-
+        self.checkSameSparkSession(other)
         return DataFrame.withPlan(
             plan.Join(left=self._plan, right=other._plan, on=None, how="cross"),
             session=self._session,
         )
 
     crossJoin.__doc__ = PySparkDataFrame.crossJoin.__doc__
 
+    def checkSameSparkSession(self, other: "DataFrame") -> None:
+        if self._session.session_id != other._session.session_id:
+            raise PySparkException("Both Datasets must belong to the same SparkSession")

Review Comment:
   Feel free to adjust that in a follow-up though.





[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159957222


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   https://github.com/apache/spark/pull/40684/files/f3d1eade9f90517039842fa59815aaf46f0a53b3#diff-7fa161b193c8792c8c0d8dd4bcae3e683ab8553edafa2ae5c13df42b26f612b0R37
   I just want to use it in `ClientE2ETestSuite` to create another SparkSession. Any better idea?





[GitHub] [spark] xinrong-meng commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1183889165


##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -249,14 +254,18 @@ def crossJoin(self, other: "DataFrame") -> "DataFrame":
             raise Exception("Cannot cartesian join when self._plan is empty.")
         if other._plan is None:
             raise Exception("Cannot cartesian join when other._plan is empty.")
-
+        self.checkSameSparkSession(other)
         return DataFrame.withPlan(
             plan.Join(left=self._plan, right=other._plan, on=None, how="cross"),
             session=self._session,
         )
 
     crossJoin.__doc__ = PySparkDataFrame.crossJoin.__doc__
 
+    def checkSameSparkSession(self, other: "DataFrame") -> None:
+        if self._session.session_id != other._session.session_id:
+            raise PySparkException("Both Datasets must belong to the same SparkSession")

Review Comment:
   Thanks! It's better to define an error class in [python/pyspark/errors/error_classes.py](https://github.com/apache/spark/blob/master/python/pyspark/errors/error_classes.py) with a template error message. Again, that doesn't block merging the PR.
   





[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1538334844

   Hi, can we merge this PR now? @HyukjinKwon @hvanhovell @xinrong-meng 😁




[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159969110


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +70,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // Generate a unique session ID for this session.
+  lazy val sessionId: String = UUID.randomUUID.toString

Review Comment:
   I got your point. Thanks





[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160149555


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] lazy val sessionId: String = client.sessionId

Review Comment:
   Make this a def? No need for lazy val.





[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1508001349

   @HyukjinKwon @hvanhovell Hi, kindly asking, can we merge this PR now?




[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1517122121

   kindly ping @HyukjinKwon 




[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159876694


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   ??? Why this change?





[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159876187


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1624,6 +1626,12 @@ class Dataset[T] private[sql] (
     }
   }
 
+  private def withDataset(other: Dataset[_]): Unit = {
+    assert(

Review Comment:
   Asserts can be elided. Can you just add an if statement here and throw a proper exception?





[GitHub] [spark] amaliujia commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "amaliujia (via GitHub)" <gi...@apache.org>.
amaliujia commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1499869942

   LGTM




[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160381245


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   > You could use `private[spark] val port: Int` accessor?
   
   Thanks for the reminder, I changed it.



##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] lazy val sessionId: String = client.sessionId

Review Comment:
   Done





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160370564


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   Can we use an ephemeral port to guard against test flakiness? See also `SparkConnectService.localPort`, and https://github.com/apache/spark/blob/c7007b37e15ed1dc7c27e44576aa21f2e9e10e3f/python/pyspark/sql/connect/session.py#L503-L506





[GitHub] [spark] HyukjinKwon commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1517134131

   I am fine, but I would defer to @hvanhovell to merge.




[GitHub] [spark] amaliujia commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "amaliujia (via GitHub)" <gi...@apache.org>.
amaliujia commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160396036


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] def sessionId: String = client.sessionId

Review Comment:
   https://github.com/apache/spark/blob/fa6e55bba4d62cdad66e5f425d8a261fe1050134/connector/connect/common/src/main/protobuf/spark/connect/base.proto#L48 this is the user_id of a client.





[GitHub] [spark] amaliujia commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "amaliujia (via GitHub)" <gi...@apache.org>.
amaliujia commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160395678


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] def sessionId: String = client.sessionId

Review Comment:
   Nit: to be more accurate, this should be `user_id+session_id`. However, because `session_id` itself is a random number, `client.sessionId` should still identify a SparkSession uniquely.





[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160409014


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] def sessionId: String = client.sessionId

Review Comment:
   Good advice. Should I change it in this PR? @HyukjinKwon @hvanhovell





[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1525614789

   kindly ping @hvanhovell 





[GitHub] [spark] Hisoka-X commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1501601564

   @HyukjinKwon @amaliujia @hvanhovell Hi, I added support for Python. PTAL again. Thanks!




[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1183569839


##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -249,14 +254,18 @@ def crossJoin(self, other: "DataFrame") -> "DataFrame":
             raise Exception("Cannot cartesian join when self._plan is empty.")
         if other._plan is None:
             raise Exception("Cannot cartesian join when other._plan is empty.")
-
+        self.checkSameSparkSession(other)
         return DataFrame.withPlan(
             plan.Join(left=self._plan, right=other._plan, on=None, how="cross"),
             session=self._session,
         )
 
     crossJoin.__doc__ = PySparkDataFrame.crossJoin.__doc__
 
+    def checkSameSparkSession(self, other: "DataFrame") -> None:
+        if self._session.session_id != other._session.session_id:
+            raise PySparkException("Both Datasets must belong to the same SparkSession")

Review Comment:
   Thanks for the reminder, I added `SessionNotSameException` for this. Please check.





[GitHub] [spark] xinrong-meng commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1183174669


##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -249,14 +254,18 @@ def crossJoin(self, other: "DataFrame") -> "DataFrame":
             raise Exception("Cannot cartesian join when self._plan is empty.")
         if other._plan is None:
             raise Exception("Cannot cartesian join when other._plan is empty.")
-
+        self.checkSameSparkSession(other)
         return DataFrame.withPlan(
             plan.Join(left=self._plan, right=other._plan, on=None, how="cross"),
             session=self._session,
         )
 
     crossJoin.__doc__ = PySparkDataFrame.crossJoin.__doc__
 
+    def checkSameSparkSession(self, other: "DataFrame") -> None:
+        if self._session.session_id != other._session.session_id:
+            raise PySparkException("Both Datasets must belong to the same SparkSession")

Review Comment:
   I am wondering if a subclass of PySparkException should be used instead of that base class. See https://github.com/apache/spark/blob/master/python/pyspark/errors/exceptions/base.py.





[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159846824


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -560,6 +560,7 @@ class Dataset[T] private[sql] (
   def stat: DataFrameStatFunctions = new DataFrameStatFunctions(sparkSession, plan.getRoot)
 
   private def buildJoin(right: Dataset[_])(f: proto.Join.Builder => Unit): DataFrame = {
+    withDataset(right)

Review Comment:
   I will add the Python implementation later. Just want to make sure my understanding is correct.





[GitHub] [spark] hvanhovell closed pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell closed pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames
URL: https://github.com/apache/spark/pull/40684




[GitHub] [spark] hvanhovell commented on pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on PR #40684:
URL: https://github.com/apache/spark/pull/40684#issuecomment-1538365093

   Merging.




[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159954928


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +70,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // Generate a unique session ID for this session.
+  lazy val sessionId: String = UUID.randomUUID.toString

Review Comment:
   `client.sessionId` is the same for different `SparkSession`s that share the same `SparkConnectClient`. It can't fulfill this requirement (we can't join datasets from different sessions).





[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159876993


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -560,6 +560,7 @@ class Dataset[T] private[sql] (
   def stat: DataFrameStatFunctions = new DataFrameStatFunctions(sparkSession, plan.getRoot)
 
   private def buildJoin(right: Dataset[_])(f: proto.Join.Builder => Unit): DataFrame = {
+    withDataset(right)

Review Comment:
   I think the understanding is spot on.





[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159875607


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1624,6 +1626,12 @@ class Dataset[T] private[sql] (
     }
   }
 
+  private def withDataset(other: Dataset[_]): Unit = {

Review Comment:
   Name it `checkSameSparkSession`? 





[GitHub] [spark] Hisoka-X commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "Hisoka-X (via GitHub)" <gi...@apache.org>.
Hisoka-X commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159963213


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1624,6 +1626,12 @@ class Dataset[T] private[sql] (
     }
   }
 
+  private def withDataset(other: Dataset[_]): Unit = {
+    assert(

Review Comment:
   Done



##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1624,6 +1626,12 @@ class Dataset[T] private[sql] (
     }
   }
 
+  private def withDataset(other: Dataset[_]): Unit = {

Review Comment:
   Done





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160370816


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   Can be done in a separate PR.
   
   BTW, you could use a `private[spark] val port: Int` accessor.






[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159960145


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +70,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // Generate a unique session ID for this session.
+  lazy val sessionId: String = UUID.randomUUID.toString

Review Comment:
   It would be the same session. If you really want to be sure, just check if the clients are referentially the same.





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1160371432


##########
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##########
@@ -49,7 +49,7 @@ import org.apache.spark.util.Utils
 object SparkConnectServerUtils {
 
   // Server port
-  private[connect] val port = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)
+  val port: Int = ConnectCommon.CONNECT_GRPC_BINDING_PORT + util.Random.nextInt(1000)

Review Comment:
   You could use `private[spark] val port: Int` accessor?





[GitHub] [spark] hvanhovell commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "hvanhovell (via GitHub)" <gi...@apache.org>.
hvanhovell commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1159874975


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +70,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // Generate a unique session ID for this session.
+  lazy val sessionId: String = UUID.randomUUID.toString

Review Comment:
   Please use `client.sessionId` for this. Also make it `private[sql]` for now.





[GitHub] [spark] amaliujia commented on a diff in pull request #40684: [SPARK-41532][CONNECT][CLIENT] Add check for operations that involve multiple data frames

Posted by "amaliujia (via GitHub)" <gi...@apache.org>.
amaliujia commented on code in PR #40684:
URL: https://github.com/apache/spark/pull/40684#discussion_r1161413372


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##########
@@ -69,6 +69,9 @@ class SparkSession private[sql] (
 
   private[this] val allocator = new RootAllocator()
 
+  // a unique session ID for this session from client.
+  private[sql] def sessionId: String = client.sessionId

Review Comment:
   I think the current way is fine, so I left it as a `Nit`.


