Posted to commits@spark.apache.org by ru...@apache.org on 2023/09/02 09:20:45 UTC
[spark] branch master updated: [SPARK-45026][CONNECT][FOLLOW-UP] Code cleanup
This is an automated email from the ASF dual-hosted git repository.
ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new f0fb434c268 [SPARK-45026][CONNECT][FOLLOW-UP] Code cleanup
f0fb434c268 is described below
commit f0fb434c268f69e6845ba97e3256d3c1b873fc95
Author: Ruifeng Zheng <ru...@apache.org>
AuthorDate: Sat Sep 2 17:20:22 2023 +0800
[SPARK-45026][CONNECT][FOLLOW-UP] Code cleanup
### What changes were proposed in this pull request?
Move 3 variables into the `isCommand` branch.
### Why are the changes needed?
They are not used in the other branches.
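The change is a standard scope-narrowing refactor: values that are only consumed on one branch are declared inside that branch, so they are neither computed nor visible on the paths that never use them. A minimal illustrative sketch of the pattern (hypothetical names, not the actual `SparkConnectPlanner` code):

```scala
// Illustrative sketch of scope narrowing: `maxBatchSize` and `header` are
// only needed on the command path, so they are declared inside the branch
// rather than before the `if`. Names and values here are made up.
def render(rows: Seq[String], isCommand: Boolean): String = {
  if (isCommand) {
    // Computed only when handling a command; other branches never pay
    // for these and cannot accidentally reference them.
    val maxBatchSize = (1024 * 0.7).toLong
    val header = s"command (batch limit $maxBatchSize)"
    (header +: rows).mkString("\n")
  } else {
    rows.mkString("\n")
  }
}
```

Besides avoiding needless work on the non-command paths, the narrower scope documents where the values are actually used.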
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #42765 from zhengruifeng/SPARK-45026-followup.
Authored-by: Ruifeng Zheng <ru...@apache.org>
Signed-off-by: Ruifeng Zheng <ru...@apache.org>
---
.../apache/spark/sql/connect/planner/SparkConnectPlanner.scala | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala b/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala
index 547b6a9fb40..11300631491 100644
--- a/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala
+++ b/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala
@@ -2464,15 +2464,15 @@ class SparkConnectPlanner(val sessionHolder: SessionHolder) extends Logging {
case _ => Seq.empty
}
- // Convert the results to Arrow.
- val schema = df.schema
- val maxBatchSize = (SparkEnv.get.conf.get(CONNECT_GRPC_ARROW_MAX_BATCH_SIZE) * 0.7).toLong
- val timeZoneId = session.sessionState.conf.sessionLocalTimeZone
-
// To avoid explicit handling of the result on the client, we build the expected input
// of the relation on the server. The client has to simply forward the result.
val result = SqlCommandResult.newBuilder()
if (isCommand) {
+ // Convert the results to Arrow.
+ val schema = df.schema
+ val maxBatchSize = (SparkEnv.get.conf.get(CONNECT_GRPC_ARROW_MAX_BATCH_SIZE) * 0.7).toLong
+ val timeZoneId = session.sessionState.conf.sessionLocalTimeZone
+
// Convert the data.
val bytes = if (rows.isEmpty) {
ArrowConverters.createEmptyArrowBatch(