Posted to issues@spark.apache.org by "Mridul Muralidharan (JIRA)" <ji...@apache.org> on 2014/08/11 06:37:11 UTC

[jira] [Comment Edited] (SPARK-2962) Suboptimal scheduling in spark

    [ https://issues.apache.org/jira/browse/SPARK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092427#comment-14092427 ] 

Mridul Muralidharan edited comment on SPARK-2962 at 8/11/14 4:35 AM:
---------------------------------------------------------------------

To give more context:

a) Our jobs start by loading data from DFS, so that load is the first stage that gets executed.

b) We sleep for 1 minute before starting the jobs (in case the cluster is busy, etc.) - unfortunately, this is not always sufficient, and IIRC there is no programmatic way to wait more deterministically for X% of the nodes to register (was something added to alleviate this? I did see some discussion). A polling sketch follows this list.

c) This becomes more of a problem because Spark no longer honours preferred locations while running on YARN - see SPARK-2089, a fallout of the 1.0 interface changes.
[ Practically, if we use a large enough number of nodes (with replication of 3 or higher), we usually do end up with quite a lot of data-local tasks eventually - so (c) is not an immediate concern for our current jobs, assuming (b) is not an issue, though it is suboptimal in the general case ]
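
One more deterministic option than a fixed sleep would be to poll how many executors have registered before kicking off the first stage. A minimal sketch follows - waitForExecutors and its minFraction/timeout defaults are illustrative names made up here, not an existing Spark API; the only Spark call used is SparkContext.getExecutorMemoryStatus, which reports one entry per block manager (executors plus the driver).

{code:scala}
import org.apache.spark.SparkContext

object ExecutorWait {
  /** Poll until at least minFraction of expectedExecutors have registered,
    * or timeoutMs elapses. Returns true if the threshold was reached. */
  def waitForExecutors(
      sc: SparkContext,
      expectedExecutors: Int,
      minFraction: Double = 0.8,
      timeoutMs: Long = 120000L,
      pollMs: Long = 500L): Boolean = {
    val needed = math.max(1, (expectedExecutors * minFraction).toInt)
    val deadline = System.currentTimeMillis() + timeoutMs
    while (System.currentTimeMillis() < deadline) {
      // getExecutorMemoryStatus has one entry per block manager, including the driver.
      val registered = sc.getExecutorMemoryStatus.size - 1
      if (registered >= needed) return true
      Thread.sleep(pollMs)
    }
    false
  }
}
{code}

Something like ExecutorWait.waitForExecutors(sc, expectedExecutors = 100) right after creating the SparkContext, falling back to the fixed sleep only on timeout, would at least remove the guesswork from (b).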




> Suboptimal scheduling in spark
> ------------------------------
>
>                 Key: SPARK-2962
>                 URL: https://issues.apache.org/jira/browse/SPARK-2962
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.0
>         Environment: All
>            Reporter: Mridul Muralidharan
>
> In findTask, irrespective of the 'locality' level specified, pendingTasksWithNoPrefs are always scheduled as PROCESS_LOCAL.
> pendingTasksWithNoPrefs contains tasks which currently do not have any alive locations - but whose locations could come up 'later': this is particularly relevant when a Spark app is just coming up and containers are still being added.
> This causes a large number of non-node-local tasks to be scheduled, incurring significant network transfer in the cluster when running with non-trivial datasets.
> The comment in the method code, "// Look for no-pref tasks after rack-local tasks since they can run anywhere.", is misleading: locality levels are tried from PROCESS_LOCAL down to ANY, so no-pref tasks get scheduled well before rack-local ones (a simplified sketch of this ordering follows below the description).
> Also note that currentLocalityIndex is reset to the taskLocality returned by this method - so returning PROCESS_LOCAL as the level will trigger the locality wait times again. (This was relevant before the recent change to the scheduler, and might be again depending on the resolution of this issue.)
> Found as part of writing a test for SPARK-2931
>  
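To illustrate the ordering described above, here is a simplified, hypothetical sketch of the level-by-level lookup - it is not the actual TaskSetManager.findTask code, just a standalone illustration of how the no-pref bucket gets drained at the PROCESS_LOCAL step, regardless of the allowed locality level:

{code:scala}
object LocalitySketch {
  // Locality levels, most local first (smaller rank = more local).
  sealed abstract class Locality(val rank: Int)
  case object ProcessLocal extends Locality(0)
  case object NodeLocal extends Locality(1)
  case object RackLocal extends Locality(2)
  case object AnyLevel extends Locality(3)

  def findTask(
      allowed: Locality,       // max locality level the caller currently permits
      processLocal: Seq[Int],
      nodeLocal: Seq[Int],
      noPrefs: Seq[Int],       // tasks whose preferred hosts have no live executor yet
      rackLocal: Seq[Int],
      anyTasks: Seq[Int]): Option[(Int, Locality)] = {

    // The "normal" buckets honour the allowed level...
    def pick(tasks: Seq[Int], level: Locality): Option[(Int, Locality)] =
      if (level.rank <= allowed.rank) tasks.headOption.map(t => (t, level)) else None

    pick(processLocal, ProcessLocal)
      // ...but the no-pref bucket is drained here, reported as PROCESS_LOCAL,
      // ahead of NODE_LOCAL and RACK_LOCAL candidates and regardless of allowed.
      .orElse(noPrefs.headOption.map(t => (t, ProcessLocal)))
      .orElse(pick(nodeLocal, NodeLocal))
      .orElse(pick(rackLocal, RackLocal))
      .orElse(pick(anyTasks, AnyLevel))
  }

  def main(args: Array[String]): Unit = {
    // Task 7's preferred hosts have not registered executors yet, so it sits in
    // the no-pref bucket and is launched immediately, reported as PROCESS_LOCAL.
    println(findTask(ProcessLocal, Nil, Nil, Seq(7), Nil, Nil)) // Some((7,ProcessLocal))
  }
}
{code}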



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org