Posted to issues@spark.apache.org by "Ruiguang Pei (JIRA)" <ji...@apache.org> on 2019/05/15 09:40:01 UTC

[jira] [Comment Edited] (SPARK-24374) SPIP: Support Barrier Execution Mode in Apache Spark

    [ https://issues.apache.org/jira/browse/SPARK-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840122#comment-16840122 ] 

Ruiguang Pei edited comment on SPARK-24374 at 5/15/19 9:39 AM:
---------------------------------------------------------------

Hi, [~mengxr]

When I'm using Barrier Execution Mode, it seems that I can't split my data into more partitions than the total number of cores without getting the exception ["Barrier execution mode does not allow run a barrier stage that requires more slots than the total number of slots in the cluster currently."].

Suppose I have an extremely large RDD but only 4 cores available, so each partition is still too large. Will this cause performance problems? Do you have any plans to support requesting more slots than are currently available?
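The constraint described above can be sketched in plain Python (this is an illustrative stand-in for the check barrier scheduling performs, not Spark's actual implementation; the function name and signature are hypothetical):

```python
def check_barrier_stage(num_partitions: int, total_slots: int) -> None:
    """Hypothetical sketch: a barrier stage must launch all of its tasks
    at once, so it is rejected when it needs more concurrent tasks (slots)
    than the cluster currently has."""
    if num_partitions > total_slots:
        raise RuntimeError(
            "Barrier execution mode does not allow run a barrier stage "
            "that requires more slots than the total number of slots in "
            "the cluster currently."
        )

# A 4-core cluster can run a 4-partition barrier stage...
check_barrier_stage(num_partitions=4, total_slots=4)

# ...but an 8-partition barrier stage on the same cluster is rejected:
try:
    check_barrier_stage(num_partitions=8, total_slots=4)
except RuntimeError as e:
    print("rejected:", e)
```

This is exactly the situation in the question: with an ordinary map stage the 8 partitions would simply queue up on the 4 cores, but a barrier stage cannot be time-sliced that way.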


was (Author: ruiguang pei):
Hi, [~mengxr]

For a map-reduce stage, the number of tasks (or data partitions) can be bigger than the total number of cores, so the computation cost of each partition can be kept under control. But when I'm using Barrier Execution Mode, it seems that I must not repartition my data into more partitions than the total number of cores, or I get a "not enough resources" exception. Will this cause performance problems? Also, isn't this kind of design a little inflexible?

> SPIP: Support Barrier Execution Mode in Apache Spark
> ----------------------------------------------------
>
>                 Key: SPARK-24374
>                 URL: https://issues.apache.org/jira/browse/SPARK-24374
>             Project: Spark
>          Issue Type: Epic
>          Components: ML, Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Major
>              Labels: Hydrogen, SPIP
>         Attachments: SPIP_ Support Barrier Scheduling in Apache Spark.pdf
>
>
> (See details in the linked/attached SPIP doc.)
> {quote}
> The proposal here is to add a new scheduling model to Apache Spark so users can properly embed distributed DL training as a Spark stage to simplify the distributed training workflow. For example, Horovod uses MPI to implement all-reduce to accelerate distributed TensorFlow training. The computation model is different from MapReduce used by Spark. In Spark, a task in a stage doesn’t depend on any other tasks in the same stage, and hence it can be scheduled independently. In MPI, all workers start at the same time and pass messages around. To embed this workload in Spark, we need to introduce a new scheduling model, tentatively named “barrier scheduling”, which launches tasks at the same time and provides users enough information and tooling to embed distributed DL training. Spark can also provide an extra layer of fault tolerance in case some tasks failed in the middle, where Spark would abort all tasks and restart the stage.
> {quote}
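The contrast the quoted SPIP draws between the two models can be illustrated in plain Python, with no Spark involved: in a MapReduce-style stage each task runs independently, whereas in barrier scheduling every task must be launched before any can proceed past the synchronization point. `threading.Barrier` stands in for that gang-start behavior here; the names are illustrative only.

```python
import threading

NUM_TASKS = 4
barrier = threading.Barrier(NUM_TASKS)
results = []
lock = threading.Lock()

def barrier_task(task_id: int) -> None:
    # Each task blocks here until all NUM_TASKS tasks have started,
    # mirroring an MPI-style all-start-together barrier stage.
    barrier.wait()
    with lock:
        results.append(task_id)

threads = [threading.Thread(target=barrier_task, args=(i,))
           for i in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every task passed the barrier, so all four recorded a result.
print(sorted(results))
```

If fewer than `NUM_TASKS` threads were ever started, the remaining ones would wait at `barrier.wait()` forever, which is the plain-Python analogue of why Spark refuses to launch a barrier stage with more tasks than available slots.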



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org