Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2020/10/23 00:42:01 UTC

[GitHub] [airflow] shivanshs9 opened a new issue #11754: Scheduler doesn't schedule task instances again due to "max_active_runs"

shivanshs9 opened a new issue #11754:
URL: https://github.com/apache/airflow/issues/11754


   
   
   **Apache Airflow version**: v2.0.0a1
   **Apache Airflow Docker Image**: docker.pkg.github.com/apache/airflow/master-python3.8:32f24cbd44fc0a893e293aa0ac24983045dfde1e
   
   
   **Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
   ```
   Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
   Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
   ```
   
   **Environment**:
   
   - **Cloud provider or hardware configuration**: KIND (Local Kubernetes)
   - **OS** (e.g. from /etc/os-release): Arch
   - **Kernel** (e.g. `uname -a`):
   - **Install tools**:
   - **Others**:
   
   **What happened**:
   
   The Smart Sensor DAG, `smart_sensor_group_shard_4`, had a task instance in the `UP_FOR_RETRY` state, but the scheduler never rescheduled it. The attached scheduler logs show that it incorrectly concluded the DAG had already reached its `max_active_runs` limit.
   
   A similar problem occurred with my custom DAG, `_new_.djsandk`, which also has `max_active_runs` set to 1. Once scheduled, it had one DAG run in the `running` state, but all of its task instances had a `NULL` state in the database (verified in MySQL). As a result, the DAG did not run on schedule at all.
   
   **What you expected to happen**:
   
   
   I expected both mentioned DAGs, `smart_sensor_group_shard_4` and `_new_.djsandk`, to be scheduled and queued for execution.
   
   **How to reproduce it**:
   
   Run any DAG with `max_active_runs` set to 1.
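A minimal DAG along these lines should reproduce the behaviour. This is an illustrative sketch, not taken from the report: the DAG id, task id, and schedule are hypothetical, and the failing `BashOperator` is only there to force a task into `UP_FOR_RETRY` while its single allowed DAG run is still `running`.

```python
# Hypothetical repro DAG; all names and intervals are illustrative.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="repro_max_active_runs",      # illustrative name
    start_date=datetime(2020, 10, 1),
    schedule_interval=timedelta(minutes=5),
    max_active_runs=1,                   # the setting that triggers the starvation
    catchup=False,
) as dag:
    # "exit 1" fails the task so it enters UP_FOR_RETRY; with max_active_runs=1
    # and the run still "running", the v2.0.0a1 scheduler never retries it.
    BashOperator(
        task_id="flaky",
        bash_command="exit 1",
        retries=2,
        retry_delay=timedelta(minutes=1),
    )
```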
   
   
   **Anything else we need to know**:
   
   
   <details>
   <summary>airflow.cfg</summary>
   
   ```
   [core]
   # Whether to serialise DAGs and persist them in DB.
   # If set to True, Webserver reads from DB instead of parsing DAG files
   # More details: https://airflow.apache.org/docs/stable/dag-serialization.html
   store_serialized_dags = False
   
   [scheduler]
   # This changes the batch size of queries in the scheduling main loop.
   # If this is too high, SQL query performance may be impacted by one
   # or more of the following:
   # - reversion to full table scan
   # - complexity of query predicate
   # - excessive locking
   # Additionally, you may hit the maximum allowable query length for your db.
   # Set this to 0 for no limit (not advised)
   max_tis_per_query = 512
   
   # Should the scheduler issue `SELECT ... FOR UPDATE` in relevant queries.
   # If this is set to False then you should not run more than a single
   # scheduler at once
   use_row_level_locking = False
   
   # The scheduler can run multiple threads in parallel to schedule dags.
   # This defines how many threads will run.
   max_threads = 8
   ```
   </details>
   
   <details>
   
   <summary>Scheduler Logs</summary>
   
   ```
   [2020-10-22 19:04:29,010] {{scheduler_job.py:1644}} INFO - DAG _new_.djsandk already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:29,015] {{scheduler_job.py:1644}} INFO - DAG ergo_job_collector already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:29,021] {{scheduler_job.py:1644}} INFO - DAG ergo_task_queuer already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:29,025] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_2 already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:29,028] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_0 already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:29,031] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_4 already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,095] {{scheduler_job.py:1644}} INFO - DAG _new_.djsandk already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,116] {{scheduler_job.py:1644}} INFO - DAG ergo_job_collector already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,120] {{scheduler_job.py:1644}} INFO - DAG ergo_task_queuer already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,127] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_2 already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,130] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_0 already has 1 active runs, not queuing any more tasks
   [2020-10-22 19:04:30,134] {{scheduler_job.py:1644}} INFO - DAG smart_sensor_group_shard_4 already has 1 active runs, not queuing any more tasks
   ```
   
   </details>
   
   I noticed the behaviour changed from https://github.com/apache/airflow/blob/87038ae42aff3ff81cba1111e418a1f411a0c7b1/airflow/jobs/scheduler_job.py#L781 to
   https://github.com/apache/airflow/blob/95be3eec42cd5ae0ef8c6402f7d1cd87e5d93848/airflow/jobs/scheduler_job.py#L1643 in Airflow v2. Previously, the scheduler processed all task instances belonging to active DAG runs (the number of active runs being bounded by `max_active_runs`). In Airflow v2, however, it processes no task instances at all once the number of `running` DAG runs reaches `max_active_runs`. This is probably the wrong interpretation: a DAG run being in the `running` state does not imply that all of its task instances are running as well, so task instances in states like `UP_FOR_RETRY` inside that run are never picked up again.
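The difference between the two behaviours can be sketched in plain Python. This is not Airflow's actual code; the function and field names are hypothetical, and the point is only to show why the v2 check starves `UP_FOR_RETRY` task instances inside an already-running DAG run.

```python
# Sketch (not Airflow's actual scheduler code) contrasting the two
# gating behaviours described above; all names here are illustrative.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TaskInstance:
    task_id: str
    state: Optional[str]  # e.g. "up_for_retry", "running", or None


def schedulable_tis_pre_v2(running_runs: List[List[TaskInstance]],
                           max_active_runs: int) -> List[TaskInstance]:
    """Pre-2.0: look at task instances of all active runs (bounded by max_active_runs)."""
    tis = []
    for run in running_runs[:max_active_runs]:
        tis.extend(ti for ti in run if ti.state == "up_for_retry")
    return tis


def schedulable_tis_v2_alpha(running_runs: List[List[TaskInstance]],
                             max_active_runs: int) -> List[TaskInstance]:
    """v2.0.0a1: skip the DAG entirely once running runs >= max_active_runs,
    so task instances inside those runs (e.g. UP_FOR_RETRY) are never examined."""
    if len(running_runs) >= max_active_runs:
        # logs "DAG ... already has 1 active runs, not queuing any more tasks"
        return []
    return schedulable_tis_pre_v2(running_runs, max_active_runs)


run = [TaskInstance("sense_shard_4", "up_for_retry")]
print([ti.task_id for ti in schedulable_tis_pre_v2([run], 1)])  # ['sense_shard_4']
print(schedulable_tis_v2_alpha([run], 1))                       # []
```

With one `running` run and `max_active_runs=1`, the pre-2.0 logic still surfaces the retryable task instance, while the v2.0.0a1 logic returns nothing, matching the "not queuing any more tasks" log lines above.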
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [airflow] shivanshs9 closed issue #11754: Scheduler doesn't schedule task instances again due to "max_active_runs"

Posted by GitBox <gi...@apache.org>.
shivanshs9 closed issue #11754:
URL: https://github.com/apache/airflow/issues/11754


   





[GitHub] [airflow] shivanshs9 commented on issue #11754: Scheduler doesn't schedule task instances again due to "max_active_runs"

Posted by GitBox <gi...@apache.org>.
shivanshs9 commented on issue #11754:
URL: https://github.com/apache/airflow/issues/11754#issuecomment-717541370


   Fixed by https://github.com/apache/airflow/pull/11730

