Posted to commits@airflow.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/08/02 07:28:00 UTC

[jira] [Commented] (AIRFLOW-5096) reduce the number of times the pickle is inserted into the database by modifying the hash field of Dag

    [ https://issues.apache.org/jira/browse/AIRFLOW-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898649#comment-16898649 ] 

ASF GitHub Bot commented on AIRFLOW-5096:
-----------------------------------------

MeiK2333 commented on pull request #5709: [AIRFLOW-5096] use modification time replace last loaded time, reduce database insert
URL: https://github.com/apache/airflow/pull/5709
 
 
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title. For example, "\[AIRFLOW-XXX\] My Airflow PR"
     - https://issues.apache.org/jira/browse/AIRFLOW-XXX
     - In case you are fixing a typo in the documentation, you can prepend your commit with \[AIRFLOW-XXX\]; code changes always need a Jira issue.
     - In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)).
     - In case you are adding a dependency, check if the license complies with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Description
   
   - [x] Here are some details about my PR, including screenshots of any UI changes:
   
   When the scheduler is started with the --do_pickle option, it inserts a pickle of every DAG file into the database on every scan, which causes the database to swell rapidly.
   
   In my opinion, the main cause is that the hash function used to decide whether a DAG matches its pickled version includes the last_loaded field, which changes every time the file is read rather than when it is modified. As a result, Airflow inserts a large amount of unchanged data into the database.
   
   I created a commit that uses the file's last modified time instead of last_loaded as the hash field, and it works fine on my machine. Please let me know if you have a better way.
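   
   To illustrate the idea, here is a minimal sketch (simplified, made-up names rather than Airflow's actual classes) of why a hash over a parse-time field produces a new pickle on every scan, while a hash over the file's modification time stays stable until the file actually changes:
   
   ```python
   import os
   import time
   
   
   class FakeDag:
       """Toy stand-in for a DAG; not Airflow's real class."""
   
       def __init__(self, dag_id, filepath):
           self.dag_id = dag_id
           # Before: set to "now" on every parse, so it changes each scan.
           self.last_loaded = time.time()
           # After: changes only when the file on disk is modified.
           self.last_modified = os.path.getmtime(filepath)
   
       def hash_with_last_loaded(self):
           return hash((self.dag_id, self.last_loaded))
   
       def hash_with_mtime(self):
           return hash((self.dag_id, self.last_modified))
   
   
   # Parsing the same unchanged file twice ("/path/to/dag.py" is a
   # hypothetical path): hash_with_last_loaded() differs between the two
   # parses, so a new pickle row is inserted on every scan, while
   # hash_with_mtime() is identical, so the stored pickle can be reused.
   ```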
   
   English is not my native language; please excuse typing errors.
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
   
   This is an enhancement to an existing feature; I don't think it needs additional tests.
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
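   
   For example, a commit message for this change following the guidelines above might look like this (the wording is illustrative, not the actual commit):
   
   ```
   [AIRFLOW-5096] Use file mtime in DAG pickle hash
   
   The DAG hash previously included last_loaded, which is set to the
   current time on every parse, so every scheduler scan inserted a new
   pickle row even for unchanged files. Hash the file's modification
   time instead so a pickle is only inserted when the DAG file changes.
   ```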
   
   ### Documentation
   
   - [x] In case of new functionality, my PR adds documentation that describes how to use it.
     - All the public functions and the classes in the PR contain docstrings that explain what they do
     - If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to an appropriate release
   
   ### Code Quality
   
   - [x] Passes `flake8`
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> reduce the number of times the pickle is inserted into the database by modifying the hash field of Dag
> ------------------------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-5096
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-5096
>             Project: Apache Airflow
>          Issue Type: Improvement
>          Components: DAG
>    Affects Versions: 1.10.3
>            Reporter: MeiK
>            Assignee: MeiK
>            Priority: Minor
>
> When the scheduler is started with the --do_pickle option, it inserts a pickle of every DAG file into the database on every scan, which causes the database to swell rapidly.
> In my opinion, the main cause is that the hash function used to decide whether a DAG matches its pickled version includes the last_loaded field, which changes every time the file is read rather than when it is modified. As a result, Airflow inserts a large amount of unchanged data into the database.
> I created a commit that uses the file's last modified time instead of last_loaded as the hash field, and it works fine on my machine. Please let me know if you have a better way.
> English is not my native language; please excuse typing errors.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)