Posted to commits@airflow.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2019/11/03 06:11:00 UTC

[jira] [Commented] (AIRFLOW-5096) reduce the number of times the pickle is inserted into the database by modifying the hash field of Dag

    [ https://issues.apache.org/jira/browse/AIRFLOW-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16965592#comment-16965592 ] 

ASF GitHub Bot commented on AIRFLOW-5096:
-----------------------------------------

MeiK2333 commented on pull request #5709: [AIRFLOW-5096] use modification time replace last loaded time, reduce database insert
URL: https://github.com/apache/airflow/pull/5709
 
 
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> reduce the number of times the pickle is inserted into the database by modifying the hash field of Dag
> ------------------------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-5096
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-5096
>             Project: Apache Airflow
>          Issue Type: Improvement
>          Components: DAG
>    Affects Versions: 1.10.3
>            Reporter: MeiK
>            Assignee: MeiK
>            Priority: Major
>
> When the scheduler runs with the --do_pickle option enabled, it inserts a pickle of every DAG file into the database on each scan, which causes the database to grow rapidly.
> In my opinion, the main cause is that the hash function used to decide whether a DAG matches its pickled version includes the last_loaded field, which changes every time the file is read, not only when it is modified. As a result, Airflow inserts a large amount of unchanged data into the database.
> I created a commit that uses the file's last modification time instead of last_loaded as the hash field, and it works well on my machine. Please let me know if there is a better approach.
> English is not my native language; please excuse typing errors.
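The idea described above can be sketched as follows. This is a hypothetical illustration, not the actual Airflow code: the function names (`dag_fingerprint`, `should_insert_pickle`) and the SHA-1 fingerprint scheme are assumptions made for the example. The point is that a fingerprint built from the file's modification time stays stable across repeated scans of an unchanged file, so the pickle is only re-inserted when the file actually changes.

```python
# Hypothetical sketch of the proposed change: fingerprint a DAG file by its
# modification time (os.path.getmtime) instead of the time it was last loaded.
# Function names and hashing scheme are illustrative, not Airflow's real API.
import hashlib
import os


def dag_fingerprint(dag_filepath):
    """Hash the file path plus its mtime; stable until the file is modified."""
    mtime = os.path.getmtime(dag_filepath)  # changes only on modification
    return hashlib.sha1("{}:{}".format(dag_filepath, mtime).encode()).hexdigest()


def should_insert_pickle(dag_filepath, stored_fingerprint):
    # Insert a new pickle only when the fingerprint differs from what is
    # already recorded in the database (None means nothing stored yet).
    return dag_fingerprint(dag_filepath) != stored_fingerprint
```

With a last_loaded-based hash, every scan produces a new value and a new database row; with the mtime-based fingerprint, repeated scans of an unchanged file compare equal and the insert is skipped.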



--
This message was sent by Atlassian Jira
(v8.3.4#803005)