Posted to mapreduce-issues@hadoop.apache.org by "Wang Yan (JIRA)" <ji...@apache.org> on 2018/10/05 02:18:00 UTC

[jira] [Created] (MAPREDUCE-7148) Fast fail jobs when exceeding the DFS quota limitation

Wang Yan created MAPREDUCE-7148:
-----------------------------------

             Summary: Fast fail jobs when exceeding the DFS quota limitation
                 Key: MAPREDUCE-7148
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7148
             Project: Hadoop Map/Reduce
          Issue Type: Improvement
          Components: task
    Affects Versions: 2.9.0, 2.8.0, 2.7.0
         Environment: Hadoop 2.7.3, Hive 0.13
            Reporter: Wang Yan


We are running Hive jobs with a DFS quota limitation of 3 TB per job. When a job exceeds the quota, the task that hits it fails and is retried a few times before the job finally fails. The retries are not helpful because the job will always fail anyway. In one particularly bad case, a job with a single reduce task wrote more than 3 TB to HDFS over 20 hours; the reduce task hit the quota limit and was retried 4 times before the job finally failed, consuming a lot of unnecessary resources. This ticket proposes letting a job fail fast when it writes too much data to the DFS and exceeds the DFS quota limitation. The fast-fail mechanism was introduced in MAPREDUCE-7022 and MAPREDUCE-6489.
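
For context, the limit in question is an HDFS space quota on the job's output directory (e.g. set with "hdfs dfsadmin -setSpaceQuota 3t <dir>"); once the directory reaches the quota, further writes throw DSQuotaExceededException. Below is a minimal sketch of the desired task-side behaviour. The names WriteOperation, TaskReporter and failTaskWithoutRetry are hypothetical placeholders for illustration, not actual MapReduce APIs; only DSQuotaExceededException is a real Hadoop class.

    import java.io.IOException;
    import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

    public class QuotaAwareWriter {

      /** Minimal stand-in for the write the task performs. */
      public interface WriteOperation {
        void run() throws IOException;
      }

      /** Hypothetical reporting channel to the MR framework. */
      public interface TaskReporter {
        void failTaskWithoutRetry(String reason, Throwable cause);
      }

      /** Runs the write; a quota violation is surfaced as a fatal, non-retriable error. */
      public static void writeOrFailFast(WriteOperation op, TaskReporter reporter)
          throws IOException {
        try {
          op.run();
        } catch (DSQuotaExceededException e) {
          // Retrying cannot succeed: the space quota is a hard limit on the
          // directory, so ask the framework to fail the task (and thus the job)
          // immediately instead of scheduling further attempts.
          reporter.failTaskWithoutRetry("DFS space quota exceeded; failing fast", e);
        }
      }
    }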


