Posted to commits@airflow.apache.org by "jens-scheffler-bosch (via GitHub)" <gi...@apache.org> on 2023/10/15 08:43:56 UTC

Re: [I] Zombie tasks detected after service restart sometimes do not retry [airflow]

jens-scheffler-bosch commented on issue #27657:
URL: https://github.com/apache/airflow/issues/27657#issuecomment-1763323254

   This error report has been stale for a while; I tried to follow the discussion while cleaning up the issue backlog. Many changes have landed since it was filed, so I assume we would need to refresh the error report against the current release 2.7.2.
   
   One thing I am curious about from the last message is that DAG files are being replaced (during a restart?). I assume for this bug we should consider the case where "only" the Airflow system is restarted (assumption: including the DB and workers?). Or do you restart just the scheduler? Does DAG parsing run within the scheduler process?
   If DAG parsing problems are also reported, poor DAG parsing performance might be a cause as well. If parsing takes too long, DAGs that have not been parsed for a longer time are dropped from (deactivated in) the database. That side effect might be what produces this message; see the sketch below for one way to check.
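   As a rough illustration (not from the original report), here is a minimal sketch of how one could inspect the last parse time and active flag of each DAG in the metadata database; it assumes an Airflow 2.x installation where airflow.models.DagModel exposes last_parsed_time and is_active:

   from airflow.models import DagModel
   from airflow.utils.session import provide_session

   @provide_session
   def report_dag_parse_state(session=None):
       # List every registered DAG with its last parse time and active flag.
       # A stale last_parsed_time or is_active=False would point to the
       # parsing-performance side effect described above.
       for dag in session.query(DagModel).all():
           print(dag.dag_id, dag.last_parsed_time, dag.is_active)

   if __name__ == "__main__":
       report_dag_parse_state()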
   
   Could you maybe provide more details about the tasks you execute, or add a dummy DAG to the error report that shows the error, and try to reproduce it on Airflow 2.7.2?
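   For example, such a dummy DAG could look roughly like the sketch below (the DAG id, sleep duration, and retry settings are illustrative assumptions, not taken from this issue): a long-running task with retries enabled, so that restarting the Airflow services mid-run should turn the task into a zombie and, ideally, trigger a retry.

   from datetime import datetime, timedelta
   import time

   from airflow import DAG
   from airflow.operators.python import PythonOperator

   def sleep_long():
       # Long enough to restart the scheduler/worker while the task runs.
       time.sleep(600)

   with DAG(
       dag_id="zombie_retry_repro",  # hypothetical name for this repro
       start_date=datetime(2023, 10, 1),
       schedule=None,
       catchup=False,
   ) as dag:
       PythonOperator(
           task_id="long_sleep",
           python_callable=sleep_long,
           retries=3,
           retry_delay=timedelta(minutes=1),
       )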

