Posted to reviews@spark.apache.org by "cloud-fan (via GitHub)" <gi...@apache.org> on 2023/09/20 12:49:36 UTC

[GitHub] [spark] cloud-fan commented on a diff in pull request #42199: [SPARK-44579][SQL] Support Interrupt On Cancel in SQLExecution

cloud-fan commented on code in PR #42199:
URL: https://github.com/apache/spark/pull/42199#discussion_r1331581159


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala:
##########
@@ -77,6 +79,11 @@ object SQLExecution {
     }
     val rootExecutionId = sc.getLocalProperty(EXECUTION_ROOT_ID_KEY).toLong
     executionIdToQueryExecution.put(executionId, queryExecution)
+    val originalInterruptOnCancel = sc.getLocalProperty(SPARK_JOB_INTERRUPT_ON_CANCEL)
+    if (originalInterruptOnCancel == null) {
+      val interruptOnCancel = sparkSession.conf.get(SQLConf.INTERRUPT_ON_CANCEL)
+      sc.setInterruptOnCancel(interruptOnCancel)

Review Comment:
   This has a side effect: if a task fails too many times, we abort its stage, and if this flag is set to true we also interrupt the stage's other running tasks. However, if a task is doing file scans, interrupting it means we won't trigger the task completion callback that frees the file streams.
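
   For concreteness, a minimal sketch of the kind of cleanup hook being referred to (illustrative only, not Spark's actual scan code; the class and field names are made up, only TaskContext.addTaskCompletionListener is the real API):

       import java.io.{FileInputStream, InputStream}

       import org.apache.spark.TaskContext

       // Hypothetical scan iterator; real file-based data sources follow a similar pattern.
       class FileScanIterator(path: String) extends Iterator[Array[Byte]] {
         private val stream: InputStream = new FileInputStream(path)

         // The cleanup callback in question: it frees the file stream when the task
         // completes. The concern above is that if the task thread is interrupted
         // instead of finishing normally, this may never run.
         Option(TaskContext.get()).foreach { ctx =>
           ctx.addTaskCompletionListener[Unit](_ => stream.close())
         }

         private val buffer = new Array[Byte](8192)
         private var lastRead = stream.read(buffer)

         override def hasNext: Boolean = lastRead != -1

         override def next(): Array[Byte] = {
           val chunk = buffer.take(lastRead)
           lastRead = stream.read(buffer)
           chunk
         }
       }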
   
   I think this will lead to memory leaks, as runtime execution errors are common, especially with ANSI mode.
   
   How does the Thrift server leverage this feature?
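
   For reference, a rough sketch of how cancellation with interrupt-on-cancel is typically wired up via job groups (the group id, statement, and threading here are made up for illustration, not the actual HiveThriftServer2 code; setJobGroup and cancelJobGroup are the real SparkContext APIs):

       import org.apache.spark.sql.SparkSession

       object CancelSketch {
         def main(args: Array[String]): Unit = {
           val spark = SparkSession.builder().master("local[*]").getOrCreate()
           val sc = spark.sparkContext
           val groupId = "statement-1234" // made-up id

           // Run the "statement" in its own thread under a job group.
           // interruptOnCancel = true asks Spark to Thread.interrupt() running
           // task threads when the group is cancelled.
           val runner = new Thread(() => {
             sc.setJobGroup(groupId, "long-running statement", interruptOnCancel = true)
             try {
               spark.range(5000000000L).selectExpr("sum(id)").collect()
             } catch {
               case e: Exception => println(s"statement cancelled: ${e.getMessage}")
             }
           })
           runner.start()

           Thread.sleep(2000)
           // Later, when the user cancels the statement:
           sc.cancelJobGroup(groupId)
           runner.join()
           spark.stop()
         }
       }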



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org