Posted to reviews@spark.apache.org by "cloud-fan (via GitHub)" <gi...@apache.org> on 2023/04/24 05:14:03 UTC

[GitHub] [spark] cloud-fan commented on a diff in pull request #40914: [SPARK-43240][SQL][3.3] Fix the wrong result issue when calling df.describe() method.

cloud-fan commented on code in PR #40914:
URL: https://github.com/apache/spark/pull/40914#discussion_r1174782853


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/stat/StatFunctions.scala:
##########
@@ -288,7 +288,7 @@ object StatFunctions extends Logging {
     }
 
     // If there is no selected columns, we don't need to run this aggregate, so make it a lazy val.
-    lazy val aggResult = ds.select(aggExprs: _*).queryExecution.toRdd.collect().head
+    lazy val aggResult = ds.select(aggExprs: _*).queryExecution.toRdd.map(_.copy()).collect().head

Review Comment:
   This is only a bug when Spark is run with third-party physical operators that release memory eagerly.
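   To illustrate the pitfall the `.map(_.copy())` fix guards against: operators can return `UnsafeRow`s backed by memory they later reuse or release, so collecting the raw references can yield wrong results. Below is a minimal, Spark-free sketch of the same failure mode; `MutableRow` and `rows` are hypothetical stand-ins, not Spark APIs.

   ```scala
   // A mutable row whose single buffer gets reused, like an UnsafeRow
   // backed by memory an operator overwrites or frees after the fact.
   final class MutableRow(var value: Int) {
     def copy(): MutableRow = new MutableRow(value)
   }

   // An iterator that reuses one buffer across all elements.
   def rows(data: Seq[Int]): Iterator[MutableRow] = {
     val buffer = new MutableRow(0)
     data.iterator.map { v => buffer.value = v; buffer }
   }

   // Collecting without copying: every element is the same object,
   // so all of them show the last value written into the buffer.
   val wrong = rows(Seq(1, 2, 3)).toArray.map(_.value)

   // Copying each row before materializing preserves the values,
   // mirroring the .map(_.copy()) added in this PR.
   val right = rows(Seq(1, 2, 3)).map(_.copy()).toArray.map(_.value)
   ```

   Here `wrong` ends up as `Array(3, 3, 3)` while `right` is `Array(1, 2, 3)`, which is the kind of wrong-result behavior SPARK-43240 reports for `df.describe()`.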



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org