Posted to commits@airflow.apache.org by "Ram (Jira)" <ji...@apache.org> on 2020/01/16 12:14:00 UTC

[jira] [Updated] (AIRFLOW-6580) Killing or marking a task as failed does not kill the Pod in the backend

     [ https://issues.apache.org/jira/browse/AIRFLOW-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ram updated AIRFLOW-6580:
-------------------------
    Summary: Killing or marking a task as failed does not kill the Pod in the backend  (was: Killing or marking a task does not kill the Pod in the backend)

> Killing or marking a task as failed does not kill the Pod in the backend
> ------------------------------------------------------------------------
>
>                 Key: AIRFLOW-6580
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-6580
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: DAG, executor-kubernetes
>    Affects Versions: 1.10.2
>            Reporter: Ram
>            Assignee: Daniel Imberman
>            Priority: Blocker
>
> We're using KubernetesPodOperator in Airflow 1.10.2.
> The pods have NodeAffinity and Tolerations configured.
> Sometimes a pod gets stuck in a Pending state.
> *But when the task fails, the Pending pod is not killed.*
> Relatedly, when we manually mark a task as failed, the DAG task stops running, but the Pod launched by that task is not killed and continues running.
> We have tried setting 'is_delete_operator_pod' to True, but for some reason the Pod then gets killed almost immediately after execution starts. We have not been able to debug the cause of this.
> Does the latest version of Airflow account for this?
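For reference, a minimal sketch of the operator configuration being described. This is an illustration only: the DAG id, task id, image, namespace, and toleration values are hypothetical placeholders; the import path matches the 1.10.x contrib layout mentioned in the report (newer Airflow versions moved the operator to the cncf.kubernetes provider package).

```python
from datetime import datetime

from airflow import DAG
# Import path for Airflow 1.10.x; in Airflow 2.x this lives in
# airflow.providers.cncf.kubernetes.operators.
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

with DAG(
    dag_id="example_dag",            # placeholder
    start_date=datetime(2020, 1, 1),
    schedule_interval=None,
) as dag:
    task = KubernetesPodOperator(
        task_id="example_task",      # placeholder
        name="example-pod",
        namespace="default",
        image="busybox:latest",      # placeholder image
        cmds=["sh", "-c", "echo hello"],
        # Ask Airflow to delete the pod when the task finishes or is killed.
        # Per the report above, on 1.10.2 this setting sometimes caused the
        # pod to be deleted almost immediately after execution started.
        is_delete_operator_pod=True,
        # Tolerations/affinity like these can leave the pod Pending forever
        # if no node matches -- the state in which the orphaned pod was seen.
        tolerations=[{
            "key": "dedicated",
            "operator": "Equal",
            "value": "airflow",
            "effect": "NoSchedule",
        }],
    )
```

The trade-off described in the report is visible here: without `is_delete_operator_pod=True`, a pod stuck Pending outlives a failed or manually-failed task; with it, the reporter observed premature deletion on 1.10.2.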



--
This message was sent by Atlassian Jira
(v8.3.4#803005)