Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2019/04/25 20:34:09 UTC

[GitHub] [airflow] fmao opened a new pull request #5188: status propagation

URL: https://github.com/apache/airflow/pull/5188
 
 
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   My PR addresses the following Airflow JIRA issues and references them in the PR title. 
   https://issues.apache.org/jira/browse/AIRFLOW-719
   https://issues.apache.org/jira/browse/AIRFLOW-982
   https://issues.apache.org/jira/browse/AIRFLOW-983
   
   The temporary fix was removed after 1.8.0 by https://github.com/apache/airflow/pull/2195
   
   In the latest branch, a new trigger rule has been added:
   https://github.com/apache/airflow/pull/4182
   
   ### Description
   
   Issue: the skipped status stops propagating to downstream tasks at a random point, and the DAG run ends up marked as failed.
   The issue is present in version 1.8.1.
   Version 1.8.0 contained a temporary fix, but it was removed after that release:
   https://github.com/apache/airflow/commit/4077c6de297566a4c598065867a9a27324ae6eb1
   https://github.com/apache/airflow/commit/92965e8275c6f2ec2282ad46c09950bab10c1cb2
   
   
   Root cause:
     The scheduler loops over each DAG and evaluates all of its task dependencies round by round.
     Each round's evaluation happens twice: once with flag_upstream_failed=False and once with flag_upstream_failed=True.

     The DAG run update method marks the DAG run as deadlocked, which stops the DAG and all of its tasks from being processed any further.
     https://github.com/apache/airflow/blob/1.8.1/airflow/models.py#L4184
     This comes from no_dependencies_met: the ALL_SUCCESS trigger rule misses the skipped status check and treats the dependency as failed when the upstream tasks are all skipped.
     https://github.com/apache/airflow/blob/1.8.1/airflow/models.py#L4152
     https://github.com/apache/airflow/blob/1.8.1/airflow/ti_deps/deps/trigger_rule_dep.py#L165
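
     To make this concrete, here is a small, self-contained illustration in plain Python (not the Airflow source) of how an ALL_SUCCESS check that ignores the skipped count reports the dependency as unmet when all upstreams are skipped, which is what lets the deadlock check above fire:

     ```python
     def all_success_dep_met(upstream, successes, skipped):
         # 1.8.1 behaviour as described above: skipped upstreams are not
         # subtracted, so they are effectively counted as failures.
         num_failures = upstream - successes
         return num_failures == 0

     # Two upstream tasks, both skipped by a short-circuit branch:
     print(all_success_dep_met(upstream=2, successes=0, skipped=2))
     # -> False, i.e. "no dependencies met", which feeds the deadlock detection
     ```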
   
     The DAG run update checks all of the DAG's task dependencies and sends ready tasks to run with flag_upstream_failed=False (the default),
     https://github.com/apache/airflow/blob/1.8.1/airflow/models.py#L4156
     which does not handle skip status propagation.

     After the DAG run update, the scheduler checks all of the DAG's task dependencies and sends ready tasks to run with flag_upstream_failed=True,
     https://github.com/apache/airflow/blob/1.8.1/airflow/jobs.py#L904
     which does handle skip status propagation.
     https://github.com/apache/airflow/blob/1.8.1/airflow/ti_deps/deps/trigger_rule_dep.py#L138
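
     The difference between the two passes, in a minimal self-contained sketch (names are illustrative, not the Airflow API): with flag_upstream_failed=False the dependency is only reported as unmet, while with flag_upstream_failed=True the skipped state is also written to the downstream task instance.

     ```python
     def evaluate_all_success(ti_state, upstream_states, flag_upstream_failed):
         successes = sum(1 for s in upstream_states if s == "success")
         skipped = sum(1 for s in upstream_states if s == "skipped")
         if flag_upstream_failed and skipped and successes + skipped == len(upstream_states):
             # Second pass: propagate the skip to the downstream task instance.
             ti_state = "skipped"
         dep_met = successes == len(upstream_states)
         return ti_state, dep_met

     # First pass (flag_upstream_failed=False): nothing is propagated.
     print(evaluate_all_success(None, ["skipped", "skipped"], False))  # (None, False)
     # Second pass (flag_upstream_failed=True): the downstream task is marked skipped.
     print(evaluate_all_success(None, ["skipped", "skipped"], True))   # ('skipped', False)
     ```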
   
     Two factors determine whether the DAG run update detects a deadlock.
     Skip status propagation relies on already-detected skipped upstreams (the skipped status is written to the database asynchronously by other workers).
     If the evaluated tasks do not follow topological order but are instead ordered by priority_weight (effectively a random order), multiple loop rounds are required to propagate the skipped status to all downstream tasks; a small simulation of this is sketched after the two conditions below.
     Depending on how close the fetched order is to topological order, the propagation reaches further or less far in each round.

     The deadlock detection is avoided only when the following conditions hold at the same time:
     1. The skipping task (the short-circuit operator's asynchronous process) writes the skipped status to the database right after the DAG run update (flag_upstream_failed=False) and before the scheduler's task checks (flag_upstream_failed=True).
     2. The task checks (flag_upstream_failed=True) fetch and evaluate all tasks in a topological order in which the skipped status can propagate in one evaluation round.
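
     An illustrative simulation (plain Python, not Airflow code) of the ordering effect: with a chain a -> b -> c where a is skipped, propagation needs one round per edge when tasks are evaluated in reverse order, but a single round when they are evaluated in topological order.

     ```python
     def propagate_round(states, upstream, eval_order):
         # One evaluation round: mark a task skipped once all of its upstream
         # tasks are already recorded as skipped (the flag_upstream_failed=True
         # behaviour described above).
         for task in eval_order:
             ups = upstream[task]
             if states[task] is None and ups and all(states[u] == "skipped" for u in ups):
                 states[task] = "skipped"
         return states

     upstream = {"a": [], "b": ["a"], "c": ["b"]}

     # Reverse (anti-topological) order: c is still unresolved after one round.
     print(propagate_round({"a": "skipped", "b": None, "c": None}, upstream, ["c", "b", "a"]))
     # {'a': 'skipped', 'b': 'skipped', 'c': None}

     # Topological order: the skip reaches the whole chain in one round.
     print(propagate_round({"a": "skipped", "b": None, "c": None}, upstream, ["a", "b", "c"]))
     # {'a': 'skipped', 'b': 'skipped', 'c': 'skipped'}
     ```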
   
   
   Fix approaches:
   
   1) Change the ALL_SUCCESS trigger rule so that num_failures = upstream - successes - skipped (see the first sketch after this list).
      This prevents the deadlock detection from being triggered. If the tasks are not ordered, multiple rounds are still required, but eventually all of the downstream tasks are marked as skipped. Alternatively, add an additional trigger rule, which is what https://github.com/apache/airflow/pull/4182 does, and make it the default trigger rule.
   
   2) Order the tasks topologically to speed up skip status propagation so it completes within one round of evaluation (see the second sketch after this list). The current ordering is by priority_weight:
      https://github.com/apache/airflow/blob/1.8.1/airflow/jobs.py#L893
      tis = sorted(tis, key=lambda x: x.priority_weight, reverse=True)
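
   A minimal sketch of approach 1 (illustrative, not the actual patch): subtracting the skipped count means skipped-only upstreams no longer look like failures, so the deadlock check in DagRun.update_state is not triggered by this case.

   ```python
   def all_success_num_failures(upstream, successes, skipped):
       # Proposed counting: skipped upstreams are no longer treated as failures.
       return upstream - successes - skipped

   # Two skipped upstreams and no successes -> zero failures -> no deadlock.
   print(all_success_num_failures(upstream=2, successes=0, skipped=2))  # 0
   ```

   A self-contained sketch of approach 2 (illustrative; the helper and the task-id mapping below are my own, not part of this PR): compute a topological index over the DAG's tasks and sort the fetched task instances by it instead of by priority_weight alone.

   ```python
   from collections import deque

   def topological_index(downstream):
       """downstream: {task_id: [downstream_task_ids]} -> {task_id: position}."""
       indegree = {t: 0 for t in downstream}
       for children in downstream.values():
           for child in children:
               indegree[child] += 1
       queue = deque(t for t, d in indegree.items() if d == 0)
       order = {}
       while queue:
           task_id = queue.popleft()
           order[task_id] = len(order)
           for child in downstream[task_id]:
               indegree[child] -= 1
               if indegree[child] == 0:
                   queue.append(child)
       return order

   # Hypothetical use at the jobs.py call site linked above:
   #   order = topological_index(
   #       {t.task_id: [d.task_id for d in t.downstream_list] for t in dag.tasks})
   #   tis = sorted(tis, key=lambda ti: order[ti.task_id])
   ```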
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes how to use it.
     - All the public functions and the classes in the PR contain docstrings that explain what they do
     - If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to an appropriate release
   
   ### Code Quality
   
   - [ ] Passes `flake8`
