Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2021/04/02 22:35:44 UTC

[GitHub] [airflow] SalmonTimo commented on issue #13542: Task stuck in "scheduled" or "queued" state, pool has all slots queued, nothing is executing

SalmonTimo commented on issue #13542:
URL: https://github.com/apache/airflow/issues/13542#issuecomment-812742063


   I ran into this issue because the scheduler was over-utilizing CPU: our `min_file_process_interval` was set to 0 (the default prior to 2.0), which in Airflow 2.0 causes heavy CPU load from constantly re-parsing DAG files. Setting this parameter to 60 fixed the issue.
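   For reference, a minimal sketch of the relevant `airflow.cfg` fragment (60 is the value the comment above uses, not a prescribed default; Airflow 2.0 itself ships with 30):

   ```ini
   [scheduler]
   # Minimum number of seconds between re-parses of each DAG file.
   # 0 forces continuous re-parsing and can saturate the scheduler's CPU.
   min_file_process_interval = 60
   ```

   The same setting can also be supplied via the environment variable `AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL=60`, which is often more convenient on container platforms like ECS.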
   The stack I observed this on:
   host: AWS ECS Cluster
   executor: CeleryExecutor
   queue: AWS SQS Queue
   
   The behavior I observed was that the scheduler would mark tasks as "queued" but never actually send them to the queue (I believe the scheduler hands off the actual queueing to the executor). My manual workaround, until I corrected the `min_file_process_interval` setting, was to stop the scheduler, clear the queued tasks, and then start a new scheduler. The new scheduler would properly send tasks to the queue for a while, before degenerating into marking tasks as queued without ever sending them.
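   As a rough sketch, that manual workaround could look like the following (hypothetical commands against a live deployment; how you stop and start the scheduler is deployment-specific, and `example_dag` plus the dates are placeholders for the affected DAG and window):

   ```
   # 1. Stop the running scheduler (deployment-specific; on ECS, stop the
   #    scheduler service/task so no new work is picked up mid-clear).

   # 2. Clear the stuck task instances for each affected DAG so they can
   #    be scheduled and queued again:
   airflow tasks clear example_dag --start-date 2021-04-01 --yes

   # 3. Start a fresh scheduler process:
   airflow scheduler
   ```

   Note that `airflow tasks clear` resets task instances regardless of state; if you only want to touch the stuck "queued" ones, clearing them individually from the UI is the more surgical option.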


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org