Posted to dev@airflow.apache.org by Vijay Ramesh <vi...@change.org> on 2017/02/19 18:50:08 UTC

dag_run immediately marked as failed without creating tasks

I sent an email to this list 2 days ago ("backfill doesn't create task
instances for subdag") but am still seeing very odd behavior with pools
configured while running the LocalExecutor.

I reset the Airflow database and pushed updated schedules to all DAGs so
we wouldn't have huge backfills when redeploying to production.  All
seemed well, but soon enough (within a few hours) some DAGs started
creating dag_run records that are immediately marked failed, with no task
instances created. Once that happens, every subsequent dag_run is also
immediately marked failed, again with no task instances created.
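
To confirm there really are no task instances for these runs, I've been
checking the metadata DB directly; a quick sketch using Airflow's own
models (assuming a standard installation, run from a Python shell on the
Airflow host):

from airflow import settings
from airflow.models import DagRun, TaskInstance

session = settings.Session()

# dag_run rows show up already in the 'failed' state...
for run in session.query(DagRun).filter(DagRun.state == 'failed'):
    # ...and have zero associated task_instance rows.
    ti_count = (session.query(TaskInstance)
                .filter(TaskInstance.dag_id == run.dag_id,
                        TaskInstance.execution_date == run.execution_date)
                .count())
    print(run.dag_id, run.execution_date, run.state, ti_count)

session.close()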

This behavior did not occur before pools were configured (we previously
just had a low parallelism of 4; with the pools change I raised
parallelism to 10, since some tasks don't require the low-concurrency
redshift pool).  As best I can figure, when tasks back up on the pools
the scheduler never even gets to queue them and instead marks the dag_run
itself as failed? Has anybody experienced this, or any ideas what I might
be doing wrong?
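
For concreteness, here's a trimmed sketch of the kind of DAG involved
(DAG and task names are placeholders; the pool name matches our actual
low-concurrency "redshift" pool, created in the UI, and airflow.cfg has
parallelism = 10 with the LocalExecutor):

# Illustrative only: DAG/task names are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2017, 2, 1),
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG('example_redshift_dag', default_args=default_args,
          schedule_interval='@hourly')

# Tasks that hit Redshift go through the low-concurrency pool...
load = BashOperator(
    task_id='load_to_redshift',
    bash_command='echo "load step"',
    pool='redshift',
    dag=dag)

# ...while other tasks run outside the pool, limited only by parallelism.
cleanup = BashOperator(
    task_id='cleanup',
    bash_command='echo "cleanup step"',
    dag=dag)

load.set_downstream(cleanup)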

Thanks!
 - Vijay Ramesh