Posted to dev@airflow.apache.org by Sergio Kef <se...@gmail.com> on 2019/07/02 22:55:26 UTC

Benchmarking Airflow

Hey folks,

Do we have something like Airflow benchmarks?
Many people (me included) seem to struggle to understand the limitations of
Airflow.

Is there some existing work on benchmarking (i.e. defining a few common
cases and measuring performance while increasing the volume of tasks)?
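
For example, a synthetic DAG that fans out into N no-op tasks would let us
scale the task volume and watch scheduler latency and memory. A rough sketch
(DAG id, task count and dates are made up, not an agreed benchmark):

# benchmark_fanout.py -- synthetic load DAG (sketch)
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

N_TASKS = 100  # increase between runs to scale the benchmark

with DAG(
    dag_id="benchmark_fanout",
    start_date=datetime(2019, 7, 1),
    schedule_interval=None,  # trigger manually for each benchmark run
    catchup=False,
) as dag:
    start = DummyOperator(task_id="start")
    # fan out into N independent no-op tasks
    for i in range(N_TASKS):
        start >> DummyOperator(task_id="noop_{}".format(i))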

I know it's quite a challenging task to compare the different executors or
different versions, etc., but even if we start very simple (e.g. the
resources required for an idle Airflow scheduler), I think we will start
getting useful insights.
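
Even that first step could be scripted in a few lines, e.g. something like
the following sketch (the script name is made up; it assumes psutil is
installed and that the scheduler was started via "airflow scheduler"):

# scheduler_rss.py -- print resident memory of running scheduler processes (sketch)
import psutil

total_rss = 0
for proc in psutil.process_iter(attrs=["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "airflow scheduler" in cmdline:
        rss = proc.memory_info().rss  # resident set size in bytes
        total_rss += rss
        print("pid={} rss={:.1f} MB".format(proc.info["pid"], rss / 1024.0 ** 2))

print("total scheduler RSS: {:.1f} MB".format(total_rss / 1024.0 ** 2))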

What are your thoughts?
S.

Re: Benchmarking Airflow

Posted by "ramandumcs@gmail.com" <ra...@gmail.com>.
Hi Sergio,

We did some benchmarking with the Local and Kubernetes (K8s) Executors. We observed that each Airflow task takes ~100 MB of memory in Local Executor mode.
With 16 GB of RAM we could run ~140 concurrent tasks. Beyond that we started getting "cannot allocate memory" errors.
With the K8s Executor, the memory footprint of a task (worker pod) increases to ~150 MB.
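
As a rough sanity check on those numbers (back-of-the-envelope math, not an additional measurement): 16 GB / ~100 MB per task gives ~160 tasks in theory, and the observed ceiling of ~140 is consistent with that once scheduler, webserver and OS overhead are subtracted. In practice you would also cap concurrency explicitly, e.g. via the "parallelism" setting in the [core] section of airflow.cfg, so the executor never tries to start more tasks than memory allows.
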
We also observed that scheduling latency increases with the number of DAG files.
airflow.cfg's "max_threads" setting controls the number of DAG files processed in parallel in every scheduling loop, so:

Time to process all DAGs = (Number of DAG files / max_threads) * (scheduler loop time)
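
To make that concrete with assumed numbers (illustrative only, not measured): with 300 DAG files, max_threads = 2 and a scheduling loop of roughly 1 second, that gives 300 / 2 * 1 s ≈ 150 seconds before every DAG file has been parsed once. Raising "max_threads" (it lives in the [scheduler] section of airflow.cfg) shortens this proportionally, at the cost of extra CPU on the scheduler host.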

Thanks,
Raman Gupta
 

