Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2021/04/16 17:29:29 UTC

[GitHub] [airflow] sosso edited a comment on issue #14205: Scheduler "deadlocks" itself when max_active_runs_per_dag is reached by up_for_retry tasks

sosso edited a comment on issue #14205:
URL: https://github.com/apache/airflow/issues/14205#issuecomment-821326846


   +1 for us on this issue as well, I think?  Very strangely, we see the most recent run for a DAG with its *run* set to 'running' while the only task in the DAG is a clear success:
   
   ![image](https://user-images.githubusercontent.com/619968/115061689-57345180-9e9e-11eb-88a4-41de2abf94d2.png)
   
   This is a catchup=False DAG whose only task runs in a pool, and there is *nothing* in the Scheduler log for this DAG for two hours (the DAG is supposed to run every 5 minutes) explaining why it can't be scheduled.  No "max active runs reached", no "no slots available in pool", nothing.  It's like the scheduler forgot this DAG existed until we rebooted it.
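   For reference, a minimal sketch of the DAG shape described above, assuming the Airflow 2.x API; the dag_id, pool name, and task body are hypothetical stand-ins, not from the actual deployment:
   
   ```python
   # Hypothetical sketch of the reported setup (Airflow 2.x API).
   # dag_id, pool name, and callable are illustrative only.
   from datetime import datetime, timedelta
   
   from airflow import DAG
   from airflow.operators.python import PythonOperator
   
   with DAG(
       dag_id="example_every_five_minutes",      # hypothetical name
       start_date=datetime(2021, 1, 1),
       schedule_interval=timedelta(minutes=5),   # "supposed to run every 5 minutes"
       catchup=False,                            # catchup=False, as in the report
       max_active_runs=1,
   ) as dag:
       only_task = PythonOperator(
           task_id="only_task",
           python_callable=lambda: None,
           pool="some_pool",                     # the only task runs in a pool
       )
   ```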


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org