Posted to issues@spark.apache.org by "Ali Smesseim (Jira)" <ji...@apache.org> on 2020/06/22 21:01:00 UTC

[jira] [Created] (SPARK-32057) SparkExecuteStatementOperation does not set CANCELED/CLOSED state correctly

Ali Smesseim created SPARK-32057:
------------------------------------

             Summary: SparkExecuteStatementOperation does not set CANCELED/CLOSED state correctly 
                 Key: SPARK-32057
                 URL: https://issues.apache.org/jira/browse/SPARK-32057
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Ali Smesseim


https://github.com/apache/spark/pull/28671 changed how cleanup is done in SparkExecuteStatementOperation. In cancel(), cleanup (killing jobs) used to happen after the state was set to CANCELED. Now the order is reversed: the jobs are killed first, which causes an exception to be thrown inside execute(), so the status of the operation becomes ERROR before it is set to CANCELED. The sketch below illustrates the ordering.
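
A minimal sketch of the ordering problem, not the actual Spark source: names such as setState, killJobs, and the terminal-state guard are hypothetical simplifications assumed here only to show why killing jobs before setting CANCELED leaves the operation in ERROR.

{code:scala}
// Hypothetical sketch; not the real SparkExecuteStatementOperation members.
object OperationState extends Enumeration {
  val RUNNING, ERROR, CANCELED = Value
}

class StatementOperationSketch {
  import OperationState._

  @volatile var state: Value = RUNNING

  private def isTerminal(s: Value): Boolean = s == ERROR || s == CANCELED

  // Assumed guard: once the operation reaches a terminal state,
  // later transitions are ignored (simplification for illustration).
  def setState(next: Value): Unit =
    if (!isTerminal(state)) state = next

  // Killing the running jobs makes execute() fail, which records ERROR.
  def killJobs(): Unit = setState(ERROR)

  // Old order: CANCELED is set before cleanup, so it sticks.
  def cancelOldOrder(): Unit = { setState(CANCELED); killJobs() }

  // New order: cleanup runs first, execute() fails while still RUNNING,
  // so the operation ends up in ERROR rather than CANCELED.
  def cancelNewOrder(): Unit = { killJobs(); setState(CANCELED) }
}

object Demo extends App {
  val a = new StatementOperationSketch; a.cancelOldOrder()
  val b = new StatementOperationSketch; b.cancelNewOrder()
  println(s"old order -> ${a.state}, new order -> ${b.state}") // CANCELED, ERROR
}
{code}

Under these assumptions, the same applies to CLOSED: any terminal state written by the failure path in execute() pre-empts the state that cancel()/close() intended to set.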

cc [~juliuszsompolski]



