Posted to commits@airflow.apache.org by "Adan Christian Rosales Ornelas (Jira)" <ji...@apache.org> on 2019/10/08 19:24:00 UTC

[jira] [Assigned] (AIRFLOW-5540) Task with SparkSubmitOperator does not fail if the Spark job it executes fails.

     [ https://issues.apache.org/jira/browse/AIRFLOW-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adan Christian Rosales Ornelas reassigned AIRFLOW-5540:
-------------------------------------------------------

    Assignee: Adan Christian Rosales Ornelas

> Task with SparkSubmitOperator does not fail if the Spark job it executes fails.
> ---------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-5540
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-5540
>             Project: Apache Airflow
>          Issue Type: Wish
>          Components: operators
>    Affects Versions: 1.10.3
>         Environment: RHEL 7.3 with a default Airflow installation.
>            Reporter: Juan M George
>            Assignee: Adan Christian Rosales Ornelas
>            Priority: Major
>         Attachments: airfow_issues
>
>
> In my test DAG, I have a task that uses SparkSubmitOperator to execute a Spark job that reads from a database table, does some processing, and writes the result to a file. When the source table the job reads from does not exist, the Spark job fails, but the task is still shown as having executed successfully. I don't see any way to handle this business-logic failure from the operator side.
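
A minimal sketch of the reported setup, assuming Airflow 1.10.x; the DAG id, application path, and connection id below are illustrative, not taken from the report:

    from datetime import datetime

    from airflow import DAG
    from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator

    dag = DAG(
        dag_id="spark_submit_repro",       # hypothetical DAG id
        start_date=datetime(2019, 10, 1),
        schedule_interval=None,
    )

    # Submits a Spark job that reads a source table, processes it, and writes
    # the result to a file. Per this report, when the source table is missing
    # the Spark application fails, yet the task is still marked successful.
    run_spark_job = SparkSubmitOperator(
        task_id="run_spark_job",
        application="/path/to/job.py",     # hypothetical application path
        conn_id="spark_default",
        dag=dag,
    )

One common cause of this symptom in general (not confirmed in this report) is submitting to YARN in cluster mode with spark.yarn.submit.waitAppCompletion=false: spark-submit then exits 0 as soon as the application is accepted, so the operator has no failing exit code to act on.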



--
This message was sent by Atlassian Jira
(v8.3.4#803005)