Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/10/16 14:47:00 UTC

[jira] [Commented] (FLINK-7851) Improve scheduling balance in case of fewer sub tasks than input operator

    [ https://issues.apache.org/jira/browse/FLINK-7851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206009#comment-16206009 ] 

ASF GitHub Bot commented on FLINK-7851:
---------------------------------------

GitHub user tillrohrmann opened a pull request:

    https://github.com/apache/flink/pull/4839

    [FLINK-7851] [scheduling] Improve scheduling balance by round robin distribution

    ## What is the purpose of the change
    
    Polls slots in a round-robin fashion from the multimap in `SlotSharingGroupAssignment`. This improves how tasks are spread across the available cluster resources when an operator has a lower degree of parallelism than the other tasks.
    
    ## Brief change log
    
    - Enforce that the available slot maps are of type `LinkedHashMap` to ensure round-robin traversal
    - Change `SlotSharingGroupAssignment#pollFromMultiMap` such that it removes the entry for a given `ResourceID`, takes a `SharedSlot` and then re-adds the list of available slots if any are left. This ensures that the next time we remove the first iterator entry, we take a slot from another TaskManager if one is available (see the sketch below).
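
    The following is a minimal sketch of the intended polling behaviour, not Flink's actual `SlotSharingGroupAssignment` code: plain `String`s stand in for the `ResourceID` keys and `SharedSlot` values, and the class name `RoundRobinMultiMap` is invented for illustration.

        import java.util.ArrayDeque;
        import java.util.Iterator;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.Queue;

        // Hypothetical stand-in for the multimap polling described above;
        // String replaces ResourceID (key) and SharedSlot (value).
        public class RoundRobinMultiMap {

            // LinkedHashMap keeps a stable insertion/iteration order,
            // which the round-robin traversal relies on.
            private final Map<String, Queue<String>> slotsPerTaskManager = new LinkedHashMap<>();

            public void add(String taskManager, String slot) {
                slotsPerTaskManager.computeIfAbsent(taskManager, k -> new ArrayDeque<>()).add(slot);
            }

            // Removes the entry of the first TaskManager in iteration order, takes one
            // slot from it, and re-adds the remaining slots at the end of the map so
            // the next poll hits a different TaskManager if one is available.
            public String pollRoundRobin() {
                Iterator<Map.Entry<String, Queue<String>>> it = slotsPerTaskManager.entrySet().iterator();
                if (!it.hasNext()) {
                    return null;
                }

                Map.Entry<String, Queue<String>> entry = it.next();
                it.remove();
                String slot = entry.getValue().poll();

                if (!entry.getValue().isEmpty()) {
                    slotsPerTaskManager.put(entry.getKey(), entry.getValue());
                }
                return slot;
            }
        }

    Re-adding the remaining slots at the end of the `LinkedHashMap` iteration order is what makes consecutive polls rotate across TaskManagers instead of draining one machine first.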
    
    ## Verifying this change
    
    - Added `SlotSharingGroupAssignmentTest#testRoundRobinPolling`
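
    A test along these lines could exercise the hypothetical `RoundRobinMultiMap` sketch above; the real `SlotSharingGroupAssignmentTest#testRoundRobinPolling` targets Flink's scheduler classes, and the TaskManager/slot names below are assumptions.

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class RoundRobinMultiMapTest {

            @Test
            public void testRoundRobinPolling() {
                RoundRobinMultiMap slots = new RoundRobinMultiMap();
                slots.add("tm1", "tm1-slot1");
                slots.add("tm1", "tm1-slot2");
                slots.add("tm2", "tm2-slot1");
                slots.add("tm2", "tm2-slot2");

                // polling should alternate between the two TaskManagers
                // instead of draining tm1 first
                assertEquals("tm1-slot1", slots.pollRoundRobin());
                assertEquals("tm2-slot1", slots.pollRoundRobin());
                assertEquals("tm1-slot2", slots.pollRoundRobin());
                assertEquals("tm2-slot2", slots.pollRoundRobin());
            }
        }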
    
    ## Does this pull request potentially affect one of the following parts:
    
      - Dependencies (does it add or upgrade a dependency): (no)
      - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
      - The serializers: (no)
      - The runtime per-record code paths (performance sensitive): (no)
      - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
    
    ## Documentation
    
      - Does this pull request introduce a new feature? (no)
      - If yes, how is the feature documented? (not applicable)
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/tillrohrmann/flink roundRobinMultiMapPolling

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/4839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4839
    
----
commit 393fc965cd43e6b3e5948b7ede484e770e4e708a
Author: Till <ti...@gmail.com>
Date:   2017-10-16T14:18:23Z

    [FLINK-7851] [scheduling] Improve scheduling balance by round robin distribution

commit c58e07dbd9f0feab51a7327822446c9b69759ea7
Author: Till <ti...@gmail.com>
Date:   2017-10-16T14:38:43Z

    Make sure that the value maps are of type LinkedHashMap in SlotSharingGroupAssignment#availableSlotsPerJid

----


> Improve scheduling balance in case of fewer sub tasks than input operator
> -------------------------------------------------------------------------
>
>                 Key: FLINK-7851
>                 URL: https://issues.apache.org/jira/browse/FLINK-7851
>             Project: Flink
>          Issue Type: Improvement
>          Components: Distributed Coordination
>    Affects Versions: 1.4.0, 1.3.2
>            Reporter: Till Rohrmann
>             Fix For: 1.4.0
>
>
> When a job has a mapper {{m1}} running with dop {{n}}, followed by a keyBy and a mapper {{m2}} (all-to-all communication) running with dop {{m}} where {{n > m}}, the sub tasks of {{m2}} are not spread uniformly across all currently used {{TaskManagers}}.
> For example: {{n = 4}}, {{m = 2}} and we have 2 TaskManagers with 2 slots each. The deployment would look like the following:
> TM1: 
> Slot 1: {{m1_1}} -> {{m2_1}}
> Slot 2: {{m1_3}} -> {{m2_2}}
> TM2:
> Slot 1: {{m1_2}}
> Slot 2: {{m1_4}}
> The reason for this behaviour is that when there are too many preferred locations (currently 8) due to an all-to-all communication pattern, we simply poll the next slot from the MultiMap in {{SlotSharingGroupAssignment}}. The polling algorithm first drains all available slots of a single machine before it polls slots from another machine. 
> I think it would be better to poll slots in a round-robin fashion with respect to the machines. That way we would get better resource utilisation by spreading the tasks more evenly.


