Posted to commits@spark.apache.org by gu...@apache.org on 2022/06/22 03:04:23 UTC

[spark] branch master updated: [SPARK-39195][SQL][FOLLOWUP] Remove flaky test of OutputCommitCoordinator

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 1e6fcba09d5 [SPARK-39195][SQL][FOLLOWUP] Remove flaky test of OutputCommitCoordinator
1e6fcba09d5 is described below

commit 1e6fcba09d5c26a4aceff37af1f36efa25240d3e
Author: Angerszhuuuu <an...@gmail.com>
AuthorDate: Wed Jun 22 12:04:10 2022 +0900

    [SPARK-39195][SQL][FOLLOWUP] Remove flaky test of OutputCommitCoordinator
    
    ### What changes were proposed in this pull request?
    Mark the flaky OutputCommitCoordinator test as `ignore` so it no longer runs.
    
    ### Why are the changes needed?
    Remove the flaky test, following discussion.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    Not needed; this change only disables a test.
    
    Closes #36943 from AngersZhuuuu/SPARK-39195-FOLLOWUP.
    
    Authored-by: Angerszhuuuu <an...@gmail.com>
    Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
 .../scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
index 62c559b52ff..95e2429ea58 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
@@ -144,7 +144,7 @@ class OutputCommitCoordinatorSuite extends SparkFunSuite with BeforeAndAfter {
     assert(tempDir.list().size === 1)
   }
 
-  test("If commit fails, if task is retried it should not be locked, and will succeed.") {
+  ignore("If commit fails, if task is retried it should not be locked, and will succeed.") {
     val rdd = sc.parallelize(Seq(1), 1)
     sc.runJob(rdd, OutputCommitFunctions(tempDir.getAbsolutePath).failFirstCommitAttempt _,
       rdd.partitions.indices)
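For context, changing `test(...)` to `ignore(...)` is ScalaTest's standard way to skip a case while keeping its body compiled and visible in the suite: the test is reported as ignored rather than executed. A minimal sketch (hypothetical suite and test names, requires the scalatest dependency on the classpath):

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical suite illustrating ScalaTest's ignore: swapping the `test`
// keyword for `ignore` leaves the body in place but skips it at runtime,
// so the flaky assertion below is never evaluated.
class ExampleSuite extends AnyFunSuite {
  test("this test runs normally") {
    assert(1 + 1 === 2)
  }

  ignore("this flaky test is skipped, not deleted") {
    assert(false) // never executed while marked ignore
  }
}
```

Keeping the body under `ignore` (rather than deleting it outright) makes it easy to re-enable the test later by restoring the `test` keyword.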


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org