Posted to commits@spark.apache.org by do...@apache.org on 2020/06/09 16:24:45 UTC

[spark] branch branch-3.0 updated: [SPARK-31921][CORE] Fix the wrong warning: "App app-xxx requires more resource than any of Workers could have"

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new c3e5cd2  [SPARK-31921][CORE] Fix the wrong warning: "App app-xxx requires more resource than any of Workers could have"
c3e5cd2 is described below

commit c3e5cd25766e0ee0eaf4513c0639229315fbb1e6
Author: yi.wu <yi...@databricks.com>
AuthorDate: Tue Jun 9 09:20:54 2020 -0700

    [SPARK-31921][CORE] Fix the wrong warning: "App app-xxx requires more resource than any of Workers could have"
    
    ### What changes were proposed in this pull request?
    
    This PR adds a check that the waiting application has not been allocated any executors before recognizing it as an application that may hang.
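    
    Below is a minimal, self-contained sketch of the added guard (the `waitingApps`, `executors`, and `usableWorkers` names come from the diff at the end of this message; the stub types are hypothetical and only for illustration):
    
    ```scala
    // Hypothetical, simplified model of the Master's scheduling state.
    case class ExecutorDesc(id: Int)
    case class AppInfo(id: String, executors: Map[Int, ExecutorDesc])
    
    def appMayHang(waitingApps: Seq[AppInfo], usableWorkers: Seq[String]): Boolean =
      // Warn only if the single waiting app has not launched ANY executor yet
      // and no alive worker has room to launch one.
      waitingApps.length == 1 &&
        waitingApps.head.executors.isEmpty &&
        usableWorkers.isEmpty
    ```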
    
    ### Why are the changes needed?
    
    It's a bugfix. The warning means there are not enough resources for the application to launch even a single executor. But a job can still run successfully while this warning is printed, which means the application has in fact launched executors, so the warning is misleading.
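    
    For instance (a hypothetical replay of the `local-cluster[2, 1, 1024]` case shown below, reusing the sketch above), once the app holds an executor on each 1-core worker, no worker has free cores, yet the app still sits alone in `waitingApps`, so the old condition warns spuriously:
    
    ```scala
    // App already holds one executor per 1-core worker; no free cores remain.
    val app = AppInfo("app-20200606222107-0000",
      executors = Map(0 -> ExecutorDesc(0), 1 -> ExecutorDesc(1)))
    val usable = Seq.empty[String]                       // no worker can launch more
    
    val oldWarn = Seq(app).length == 1 && usable.isEmpty // true: spurious warning
    val newWarn = appMayHang(Seq(app), usable)           // false: fixed by this PR
    ```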
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes. Before this PR, when using local-cluster mode to start spark-shell, e.g. `./bin/spark-shell --master "local-cluster[2, 1, 1024]"`, the user would always see the warning:
    
    ```
    20/06/06 22:21:02 WARN Utils: Your hostname, C02ZQ051LVDR resolves to a loopback address: 127.0.0.1; using 192.168.1.6 instead (on interface en0)
    20/06/06 22:21:02 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    20/06/06 22:21:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
    NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
    Spark context Web UI available at http://192.168.1.6:4040
    Spark context available as 'sc' (master = local-cluster[2, 1, 1024], app id = app-20200606222107-0000).
    Spark session available as 'spark'.
    20/06/06 22:21:07 WARN Master: App app-20200606222107-0000 requires more resource than any of Workers could have.
    20/06/06 22:21:07 WARN Master: App app-20200606222107-0000 requires more resource than any of Workers could have.
    ```
    
    After this PR, the warning is gone.
    
    ### How was this patch tested?
    
    Tested manually.
    
    Closes #28742 from Ngone51/fix_warning.
    
    Authored-by: yi.wu <yi...@databricks.com>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
    (cherry picked from commit 38873d5196108897a29888dcd77a0c0d835772e6)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 core/src/main/scala/org/apache/spark/deploy/master/Master.scala | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/core/src/main/scala/org/apache/spark/deploy/master/Master.scala b/core/src/main/scala/org/apache/spark/deploy/master/Master.scala
index 8d3795c..3818a87 100644
--- a/core/src/main/scala/org/apache/spark/deploy/master/Master.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/master/Master.scala
@@ -704,7 +704,9 @@ private[deploy] class Master(
         val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
           .filter(canLaunchExecutor(_, app.desc))
           .sortBy(_.coresFree).reverse
-        if (waitingApps.length == 1 && usableWorkers.isEmpty) {
+        val appMayHang = waitingApps.length == 1 &&
+          waitingApps.head.executors.isEmpty && usableWorkers.isEmpty
+        if (appMayHang) {
           logWarning(s"App ${app.id} requires more resource than any of Workers could have.")
         }
         val assignedCores = scheduleExecutorsOnWorkers(app, usableWorkers, spreadOutApps)

