Posted to yarn-dev@hadoop.apache.org by "Jason Lowe (JIRA)" <ji...@apache.org> on 2017/02/14 21:36:41 UTC
[jira] [Created] (YARN-6191) CapacityScheduler preemption by container priority can be problematic for MapReduce
Jason Lowe created YARN-6191:
--------------------------------
Summary: CapacityScheduler preemption by container priority can be problematic for MapReduce
Key: YARN-6191
URL: https://issues.apache.org/jira/browse/YARN-6191
Project: Hadoop YARN
Issue Type: Bug
Components: capacityscheduler
Reporter: Jason Lowe
A MapReduce job with thousands of reducers and just a couple of maps left to go was running in a preemptable queue. Periodically other queues would get busy and the RM would preempt some resources from the job, but it _always_ picked the job's map tasks first because they use the lowest-priority containers. Even though the reducers had been running for less time, they were mostly spared while the maps were always shot. Since the map tasks ran longer than the preemption period, the job was stuck in a perpetual preemption loop.
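The pathology falls out of the victim-selection order. A toy sketch of it (not the actual CapacityScheduler code): MapReduce requests map containers at a numerically larger, i.e. lower, priority than reduce containers (20 vs. 10 in the MR app master), and a preemption policy that sorts candidates purely by container priority will therefore shoot the maps every round, regardless of how long each container has been running. The `Container` record and `pickVictims` helper below are hypothetical simplifications for illustration.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PreemptionSketch {
    // Hypothetical stand-in for a running container; in YARN a larger
    // priority number means a lower-priority container.
    record Container(String id, int priority, long runningMillis) {}

    // Pick victims by container priority alone: lowest-priority
    // (largest number) first, ignoring elapsed running time.
    static List<Container> pickVictims(List<Container> running, int needed) {
        return running.stream()
                .sorted(Comparator.comparingInt(Container::priority).reversed())
                .limit(needed)
                .toList();
    }

    public static void main(String[] args) {
        List<Container> running = new ArrayList<>();
        // Two long-running maps (priority 20) among many short reducers (priority 10).
        running.add(new Container("map_1", 20, 600_000));
        running.add(new Container("map_2", 20, 600_000));
        for (int i = 0; i < 5; i++) {
            running.add(new Container("reduce_" + i, 10, 30_000));
        }
        // Reclaim two containers: the maps are chosen every time, even though
        // the reducers have been running for far less time.
        for (Container victim : pickVictims(running, 2)) {
            System.out.println("preempt " + victim.id());
        }
    }
}
```

Because the maps take longer than the preemption interval to finish, each preemption round kills and reschedules the same maps, which is exactly the loop described above.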
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)