Posted to commits@airflow.apache.org by "Oliver Ricken (JIRA)" <ji...@apache.org> on 2019/08/13 10:33:00 UTC

[jira] [Created] (AIRFLOW-5191) SubDag is marked failed

Oliver Ricken created AIRFLOW-5191:
--------------------------------------

             Summary: SubDag is marked failed 
                 Key: AIRFLOW-5191
                 URL: https://issues.apache.org/jira/browse/AIRFLOW-5191
             Project: Apache Airflow
          Issue Type: Bug
          Components: DAG, DagRun
    Affects Versions: 1.10.4
         Environment: CentOS 7, Maria-DB, python 3.6.7, Airflow 1.10.4
            Reporter: Oliver Ricken


Dear all,
after upgrading from Airflow 1.10.2 to 1.10.4, we are experiencing strange and very problematic SubDag behaviour (SubDags are crucial for our environment and used frequently).
When a task inside a SubDag fails and is waiting for a retry ("up_for_retry"), the SubDag itself is now marked "failed" (in 1.10.2, the SubDag remained in the "running" state). This is particularly problematic for downstream tasks that depend on the state of the SubDag: since our downstream tasks use the "all_done" trigger rule, they are triggered as soon as the SubDag is marked "failed", even though a task inside the SubDag is still awaiting retry and will most likely still yield successfully processed data. That data is therefore not available to the prematurely triggered task downstream of the SubDag.
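For illustration, here is a minimal sketch of the kind of setup described above (the DAG, task and callable names are hypothetical, not our production code): a SubDag containing a task configured with retries, and a downstream task using trigger_rule="all_done".

    # Minimal sketch (Airflow 1.10.x imports); names are illustrative only.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator
    from airflow.operators.python_operator import PythonOperator
    from airflow.operators.subdag_operator import SubDagOperator

    default_args = {
        "owner": "airflow",
        "start_date": datetime(2019, 8, 1),
        "retries": 3,                        # failing task should go to "up_for_retry"
        "retry_delay": timedelta(minutes=10),
    }


    def _sometimes_failing():
        # Placeholder for a processing step that fails transiently and
        # succeeds on a later retry.
        raise RuntimeError("transient failure, will be retried")


    def build_subdag(parent_dag_id, child_dag_id, args):
        subdag = DAG(
            dag_id="{}.{}".format(parent_dag_id, child_dag_id),
            default_args=args,
            schedule_interval=None,
        )
        PythonOperator(
            task_id="flaky_task",
            python_callable=_sometimes_failing,
            dag=subdag,
        )
        return subdag


    with DAG("parent_dag", default_args=default_args, schedule_interval=None) as dag:
        subdag_task = SubDagOperator(
            task_id="my_subdag",
            subdag=build_subdag("parent_dag", "my_subdag", default_args),
        )
        # Fires on "all_done"; under 1.10.4 the SubDag is already marked
        # "failed" while flaky_task is still waiting for its retry, so this
        # task starts before the retried data is available.
        downstream = DummyOperator(task_id="consume_results", trigger_rule="all_done")

        subdag_task >> downstream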

This is a severe problem for us; if there is no quick solution or workaround, it is worth rolling back to 1.10.2.

We urgently need help on this matter.

Thanks a lot in advance; any suggestions and input are highly appreciated!

Cheers

Oliver


