Posted to commits@spark.apache.org by gu...@apache.org on 2020/03/25 04:44:48 UTC

[spark] branch master updated: [SPARK-31239][CORE][TEST] Increase await duration in `WorkerDecommissionSuite`.`verify a task with all workers decommissioned succeeds`

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new c2c5b2d  [SPARK-31239][CORE][TEST] Increase await duration in `WorkerDecommissionSuite`.`verify a task with all workers decommissioned succeeds`
c2c5b2d is described below

commit c2c5b2df50e8fb42218a786820a98ba702d9fd4b
Author: Xingbo Jiang <xi...@databricks.com>
AuthorDate: Wed Mar 25 13:43:35 2020 +0900

    [SPARK-31239][CORE][TEST] Increase await duration in `WorkerDecommissionSuite`.`verify a task with all workers decommissioned succeeds`
    
    ### What changes were proposed in this pull request?
    
    The test case has been flaky because the task execution time sometimes exceeds the 2-second await duration. Increase the await duration from 2 to 10 seconds to avoid the flakiness.
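    
    As a minimal, self-contained sketch of the timing pattern involved (plain Scala futures stand in for the test's `countAsync` job, and the 3-second delay is a hypothetical stand-in for slow task execution; Spark's `ThreadUtils.awaitResult` lets the `java.util.concurrent.TimeoutException` propagate much like `Await.result` does, which is why the test can intercept it directly):
    
        import java.util.concurrent.TimeoutException
        import scala.concurrent.{Await, Future}
        import scala.concurrent.ExecutionContext.Implicits.global
        import scala.concurrent.duration._
        
        object AwaitTimeoutSketch {
          def main(args: Array[String]): Unit = {
            // Stand-in for the async count in the test: it completes, but
            // only after a delay that can exceed a tight await duration.
            def slowCount(): Future[Long] = Future {
              Thread.sleep(3000) // hypothetical slow task execution
              10L
            }
        
            // A 2-second bound is too tight for the 3-second job, so the
            // await throws, which is how the flaky runs were failing.
            try {
              Await.result(slowCount(), 2.seconds)
            } catch {
              case _: TimeoutException => println("2s await timed out")
            }
        
            // A 10-second bound leaves enough headroom, so the await succeeds.
            val result = Await.result(slowCount(), 10.seconds)
            assert(result == 10L)
            println(s"10s await returned $result")
          }
        }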
    
    ### How was this patch tested?
    
    Tested locally; the test no longer fails.
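    
    For reference, an individual suite like this one can typically be run locally with something like the following (assuming a standard Spark dev checkout):
    
        build/sbt "core/testOnly *WorkerDecommissionSuite"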
    
    Closes #28007 from jiangxb1987/DecomTest.
    
    Authored-by: Xingbo Jiang <xi...@databricks.com>
    Signed-off-by: HyukjinKwon <gu...@apache.org>
---
 .../scala/org/apache/spark/scheduler/WorkerDecommissionSuite.scala    | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/core/src/test/scala/org/apache/spark/scheduler/WorkerDecommissionSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/WorkerDecommissionSuite.scala
index 15733b0..8c6f86a 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/WorkerDecommissionSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/WorkerDecommissionSuite.scala
@@ -70,13 +70,13 @@ class WorkerDecommissionSuite extends SparkFunSuite with LocalSparkContext {
     val sched = sc.schedulerBackend.asInstanceOf[StandaloneSchedulerBackend]
     val execs = sched.getExecutorIds()
     execs.foreach(execId => sched.decommissionExecutor(execId))
-    val asyncCountResult = ThreadUtils.awaitResult(asyncCount, 2.seconds)
+    val asyncCountResult = ThreadUtils.awaitResult(asyncCount, 10.seconds)
     assert(asyncCountResult === 10)
     // Try and launch task after decommissioning, this should fail
     val postDecommissioned = input.map(x => x)
     val postDecomAsyncCount = postDecommissioned.countAsync()
     val thrown = intercept[java.util.concurrent.TimeoutException]{
-      val result = ThreadUtils.awaitResult(postDecomAsyncCount, 2.seconds)
+      val result = ThreadUtils.awaitResult(postDecomAsyncCount, 10.seconds)
     }
     assert(postDecomAsyncCount.isCompleted === false,
       "After exec decommission new task could not launch")


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org