Posted to issues@flink.apache.org by "Lijie Wang (Jira)" <ji...@apache.org> on 2022/11/29 01:55:00 UTC
[jira] [Comment Edited] (FLINK-30198) Support AdaptiveBatchScheduler to set per-task size for reducer task
[ https://issues.apache.org/jira/browse/FLINK-30198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17640337#comment-17640337 ]
Lijie Wang edited comment on FLINK-30198 at 11/29/22 1:54 AM:
--------------------------------------------------------------
[~zhuzh] Would you like to share your thoughts?
was (Author: wanglijie95):
[~zhuzh] Would you like share your thoughs?
> Support AdaptiveBatchScheduler to set per-task size for reducer task
> ---------------------------------------------------------------------
>
> Key: FLINK-30198
> URL: https://issues.apache.org/jira/browse/FLINK-30198
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Coordination
> Reporter: Aitozi
> Priority: Major
>
> When we use AdaptiveBatchScheduler in our case, we found that it works well in most cases, but there is a limitation: there is only one global parameter for per-task data size, {{jobmanager.adaptive-batch-scheduler.avg-data-volume-per-task}}.
> However, in a map-reduce architecture, the reducer tasks usually have more complex computation logic, such as aggregate/sort/join operators. So I think it would be nicer if we could set the per-task data size for reducer and mapper tasks individually.
> Then, how do we distinguish the reducer tasks?
> IMO, we can let the parallelism decider check whether the vertex has a hash edge input. If yes, it should be a reducer task.
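The proposal above can be sketched in Java. This is a hypothetical illustration, not Flink's actual {{VertexParallelismDecider}} API: the edge types, the reducer-specific byte limit, and all class/constant names are assumptions made for the example. The only parts taken from the issue are the existing global option ({{jobmanager.adaptive-batch-scheduler.avg-data-volume-per-task}}) and the heuristic that a vertex with a hash edge input is treated as a reducer.

```java
import java.util.List;

// Hypothetical sketch: decide a vertex's parallelism from its total input size,
// using a smaller per-task data volume for reducer vertices. A vertex counts as
// a "reducer" when at least one of its inputs is a hash edge (per the issue).
public class ParallelismDeciderSketch {
    enum EdgeType { FORWARD, HASH, BROADCAST }

    // Assumed config values; the reducer-specific limit is the new knob this
    // issue proposes, the mapper limit mirrors the existing global option.
    static final long MAPPER_BYTES_PER_TASK  = 1L << 30;   // 1 GiB
    static final long REDUCER_BYTES_PER_TASK = 512L << 20; // 512 MiB
    static final int  MAX_PARALLELISM = 1000;

    // Heuristic from the issue: a hash edge input marks a reducer task.
    static boolean isReducer(List<EdgeType> inputEdges) {
        return inputEdges.contains(EdgeType.HASH);
    }

    static int decideParallelism(long totalInputBytes, List<EdgeType> inputEdges) {
        long perTask = isReducer(inputEdges) ? REDUCER_BYTES_PER_TASK
                                             : MAPPER_BYTES_PER_TASK;
        long tasks = (totalInputBytes + perTask - 1) / perTask; // ceiling division
        return (int) Math.max(1, Math.min(tasks, MAX_PARALLELISM));
    }

    public static void main(String[] args) {
        long tenGiB = 10L << 30;
        // Mapper vertex (forward input only): 10 GiB / 1 GiB per task.
        System.out.println(decideParallelism(tenGiB, List.of(EdgeType.FORWARD)));
        // Reducer vertex (hash input): 10 GiB / 512 MiB per task.
        System.out.println(decideParallelism(tenGiB, List.of(EdgeType.HASH)));
    }
}
```

With these assumed limits, the same 10 GiB input yields 10 mapper tasks but 20 reducer tasks, which is the behavior the issue asks for: heavier reducer logic gets less data per task without changing the mapper setting.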
--
This message was sent by Atlassian Jira
(v8.20.10#820010)