Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2021/08/27 10:42:20 UTC

[GitHub] [airflow] eladkal commented on issue #15588: Task is retried after Scheduler restart

eladkal commented on issue #15588:
URL: https://github.com/apache/airflow/issues/15588#issuecomment-907108860


   > IMHO only 1 and 3 are feasible solutions, but I am not sure where to store the job run ID so that it survives the scheduler restart. 3) is IMHO the cleanest solution.
   
   The scheduler shouldn't care about this; it is something that needs to be handled by the operator itself.
   I'm not sure I follow the problem, because `DatabricksRunNowOperator` has a `job_id` parameter that you configure:
   
   https://github.com/apache/airflow/blob/866a601b76e219b3c043e1dbbc8fb22300866351/airflow/providers/databricks/operators/databricks.py#L462
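   
   For illustration, a minimal sketch of how that parameter might be used; the DAG name, connection id `databricks_default`, and the `job_id` value here are placeholders, not taken from the issue:
   
   ```python
   from datetime import datetime
   
   from airflow import DAG
   from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator
   
   with DAG(
       dag_id="databricks_run_now_example",  # hypothetical DAG for illustration
       start_date=datetime(2021, 1, 1),
       schedule_interval=None,
       catchup=False,
   ) as dag:
       run_job = DatabricksRunNowOperator(
           task_id="run_existing_job",
           databricks_conn_id="databricks_default",  # assumed connection name
           job_id=42,  # id of a job already defined in Databricks (placeholder)
       )
   ```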
   
   In any case, if you think there is a problem to fix, just open a PR with the approach you think best solves it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@airflow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org