Posted to commits@airflow.apache.org by "Bolke de Bruin (JIRA)" <ji...@apache.org> on 2017/12/12 11:47:02 UTC

[jira] [Resolved] (AIRFLOW-1854) Improve Spark submit hook for cluster mode

     [ https://issues.apache.org/jira/browse/AIRFLOW-1854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bolke de Bruin resolved AIRFLOW-1854.
-------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.9.1

Issue resolved by pull request #2852
[https://github.com/apache/incubator-airflow/pull/2852]

> Improve Spark submit hook for cluster mode
> ------------------------------------------
>
>                 Key: AIRFLOW-1854
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-1854
>             Project: Apache Airflow
>          Issue Type: Improvement
>          Components: hooks
>            Reporter: Milan van der Meer
>            Assignee: Milan van der Meer
>            Priority: Minor
>              Labels: features
>             Fix For: 1.9.1
>
>
> *We are already working on this issue and will submit a PR soon*
> When a Spark job is submitted to a standalone cluster using the Spark submit hook, the hook gets back the return code of the spark-submit invocation itself, not of the Spark job.
> This means that as soon as the submission is accepted by the cluster, the Airflow job will be marked successful, even if the Spark job subsequently fails on the cluster.
> Suggested solution:
> * When you execute a Spark submit in cluster mode, the logs will contain a driver ID.
> * Use this driver ID to poll the cluster for the driver state.
> * Based on the driver's state, mark the job as successful or failed (see the sketch after this list).
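>
> For illustration, a minimal sketch of this flow (not the actual implementation from the PR). It assumes spark-submit is on the PATH, that in standalone cluster mode its output contains a driver ID of the form driver-<timestamp>-<seq>, and that {{spark-submit --status}} prints a {{driverState}} line; the {{submit_and_track}} helper and both regexes are hypothetical names.
> {code:python}
> import re
> import subprocess
> import time
>
> # Assumed driver ID format, e.g. driver-20171212114702-0001
> DRIVER_ID_RE = re.compile(r"(driver-\d{14}-\d{4})")
> # Assumed status output format, e.g. driverState : FINISHED (quotes optional)
> STATE_RE = re.compile(r'"?driverState"?\s*:\s*"?(\w+)')
>
>
> def submit_and_track(master, app_jar, poll_interval=10):
>     """Submit in standalone cluster mode and poll the driver state."""
>     submit = subprocess.run(
>         ["spark-submit", "--master", master, "--deploy-mode", "cluster", app_jar],
>         capture_output=True, text=True,
>     )
>     # In cluster mode spark-submit returns once the driver is accepted;
>     # its output contains the driver ID assigned by the master.
>     match = DRIVER_ID_RE.search(submit.stdout + submit.stderr)
>     if not match:
>         raise RuntimeError("no driver ID found in spark-submit output")
>     driver_id = match.group(1)
>
>     # Poll the master until the driver reaches a terminal state.
>     while True:
>         status = subprocess.run(
>             ["spark-submit", "--master", master, "--status", driver_id],
>             capture_output=True, text=True,
>         )
>         state_match = STATE_RE.search(status.stdout + status.stderr)
>         state = state_match.group(1) if state_match else "UNKNOWN"
>         if state == "FINISHED":
>             return driver_id  # the Spark job itself succeeded
>         if state in ("FAILED", "ERROR", "KILLED"):
>             raise RuntimeError("driver %s ended in state %s" % (driver_id, state))
>         time.sleep(poll_interval)
> {code}
> Polling through spark-submit itself keeps the hook free of extra HTTP dependencies; the raised exception, rather than the submission return code, then decides whether the Airflow job fails.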


