Posted to reviews@spark.apache.org by "nchammas (via GitHub)" <gi...@apache.org> on 2024/03/02 22:47:22 UTC

[PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

nchammas opened a new pull request, #45363:
URL: https://github.com/apache/spark/pull/45363

   ### What changes were proposed in this pull request?
   
   Clarify that, if explicit pivot values are not provided, Spark will eagerly compute them.
   
   ### Why are the changes needed?
   
   The current wording on `master` is misleading. Saying that one version of pivot is more or less "efficient" than the other glosses over the fact that one is lazy and the other is not. Spark users learn early on that transformations are generally lazy; exceptions to this rule should be highlighted more clearly.
   
   I experienced this personally when I called pivot on a DataFrame without providing explicit values, and Spark took around 20 minutes to compute the distinct pivot values. Looking at the docs, I felt that "less efficient" didn't accurately represent this behavior.
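   For readers unfamiliar with the behavior being documented, here is a minimal plain-Python sketch (not Spark, and not the actual implementation) of why a pivot without explicit values must scan the data first: the output schema depends on the distinct values in the pivot column, so those values have to be computed eagerly before the result can even be described.

   ```python
   # Toy illustration of pivoting (year, course, earnings) rows so that each
   # course becomes its own column. All names here are illustrative.
   rows = [
       ("2012", "dotNET", 10000),
       ("2012", "Java", 20000),
       ("2013", "dotNET", 48000),
   ]

   # Without an explicit list of pivot values, the distinct values must be
   # computed from the data itself -- this is the eager scan in question.
   distinct_courses = sorted({course for _, course, _ in rows})

   # With explicit values (e.g. ["dotNET", "Java"]), that scan is skipped
   # because the output columns are already known.
   pivoted = {}
   for year, course, earnings in rows:
       pivoted.setdefault(year, {c: 0 for c in distinct_courses})
       pivoted[year][course] += earnings
   ```

   In Spark the equivalent scan runs as a separate distributed job over the full dataset, which is why it can take a long time on large inputs.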
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes, updated user docs.
   
   ### How was this patch tested?
   
   I built and reviewed the docs locally.
   
   <img width="300" src="https://github.com/apache/spark/assets/1039369/0079e6ea-6a5a-4a00-a1ad-45a08c07f716" />
   <img width="400" src="https://github.com/apache/spark/assets/1039369/8a4873a0-e80f-408c-aa38-eac5fa51611c" />
   
   ### Was this patch authored or co-authored using generative AI tooling?
   
   No.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "nchammas (via GitHub)" <gi...@apache.org>.
nchammas commented on PR #45363:
URL: https://github.com/apache/spark/pull/45363#issuecomment-1976753715

   I've updated the screenshots in the PR description to reflect the latest changes.



Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "nchammas (via GitHub)" <gi...@apache.org>.
nchammas commented on code in PR #45363:
URL: https://github.com/apache/spark/pull/45363#discussion_r1510094130


##########
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##########
@@ -324,18 +324,18 @@ class RelationalGroupedDataset protected[sql](
   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
    *
-   * There are two versions of `pivot` function: one that requires the caller to specify the list
-   * of distinct values to pivot on, and one that does not. The latter is more concise but less
-   * efficient, because Spark needs to first compute the list of distinct values internally.
-   *
    * {{{
    *   // Compute the sum of earnings for each year by course with each course as a separate column
-   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
-   *
-   *   // Or without specifying column values (less efficient)
    *   df.groupBy("year").pivot("course").sum("earnings")
    * }}}
    *
+   * @note Spark will '''eagerly''' compute the distinct values in `pivotColumn` so it can determine
+   *    the resulting schema of the transformation. Depending on the size and complexity of your
+   *    data, this may take some time. In other words, though the pivot transformation is lazy like
+   *    most DataFrame transformations, computing the distinct pivot values is not. To avoid any
+   *    eager computations, provide an explicit list of values via
+   *    `pivot(pivotColumn: String, values: Seq[Any])`.

Review Comment:
   I probably spent about an hour trying to get this to work as a proper link via `[[pivot(...]]`, per [the scaladoc docs on ambiguous links][1], but I could not get it to work.
   
   [1]: https://docs.scala-lang.org/overviews/scaladoc/for-library-authors.html#resolving-ambiguous-links-within-scaladoc-comments




Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #45363:
URL: https://github.com/apache/spark/pull/45363#discussion_r1510453526


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##########
@@ -259,18 +259,19 @@ class RelationalGroupedDataset private[sql] (
   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
    *
-   * There are two versions of `pivot` function: one that requires the caller to specify the list
-   * of distinct values to pivot on, and one that does not. The latter is more concise but less
-   * efficient, because Spark needs to first compute the list of distinct values internally.
-   *
    * {{{
    *   // Compute the sum of earnings for each year by course with each course as a separate column
-   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
-   *
-   *   // Or without specifying column values (less efficient)
    *   df.groupBy("year").pivot("course").sum("earnings")
    * }}}
    *
+   * @note

Review Comment:
   I wonder if we can just make it a bit shorter and put it into the main doc instead of a separate note. I don't want to scare users about this .. see, e.g., how `DataFrameReader.csv` documents schema inference.




Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "nchammas (via GitHub)" <gi...@apache.org>.
nchammas commented on code in PR #45363:
URL: https://github.com/apache/spark/pull/45363#discussion_r1510555559


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##########
@@ -259,18 +259,19 @@ class RelationalGroupedDataset private[sql] (
   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
    *
-   * There are two versions of `pivot` function: one that requires the caller to specify the list
-   * of distinct values to pivot on, and one that does not. The latter is more concise but less
-   * efficient, because Spark needs to first compute the list of distinct values internally.
-   *
    * {{{
    *   // Compute the sum of earnings for each year by course with each course as a separate column
-   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
-   *
-   *   // Or without specifying column values (less efficient)
    *   df.groupBy("year").pivot("course").sum("earnings")
    * }}}
    *
+   * @note

Review Comment:
   I trimmed the note a bit. Is that better?
   
   I also took a look at the CSV reader method:
   
   https://github.com/apache/spark/blob/a1b0da200b271214e9d6b3170308509d7d514c7f/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala#L530-L532
   
   It's pretty similar to what I'm proposing here.
   
   I believe it's more important to highlight the eager computation here, since `pivot` is a transformation and, unlike with reader methods, users are probably not expecting it to trigger an expensive computation. But I agree, we don't want to make it sound like there's something _wrong_ with not specifying pivot values.




Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #45363:
URL: https://github.com/apache/spark/pull/45363#issuecomment-1977745985

   Merged to master.



Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #45363:
URL: https://github.com/apache/spark/pull/45363#discussion_r1510598001


##########
python/pyspark/sql/group.py:
##########
@@ -450,6 +446,10 @@ def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) ->
         values : list, optional
             List of values that will be translated to columns in the output DataFrame.
 
+            .. note:: If ``values`` is not provided, Spark will **eagerly** compute the distinct

Review Comment:
   this too. I would just put it up in the doctest (like `DataFrameReader.csv`)




Re: [PR] [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation [spark]

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon closed pull request #45363: [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation
URL: https://github.com/apache/spark/pull/45363

