Posted to commits@airflow.apache.org by ka...@apache.org on 2020/08/11 22:34:41 UTC

[airflow] branch v1-10-test updated (a281faa -> 242d6d0)

This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a change to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git.


 discard a281faa  fix static tests
 discard 71c6ace  Makes multi-namespace mode optional (#9570)
 discard 14ca77d  Fix KubernetesPodOperator reattachment (#10230)
    omit 90fe915  add documentation for k8s fixes
    omit 9e2557a  Fix init containers
    omit 215c16f  Init container as dict instead of object
    omit 3773dac  init-container fix
    omit be06b18  Add kubernetes changes to UPDATING.md
    omit 33626c1  Fix volume secret extraction
    omit 9b89f07  Allows secrets with mounts in init containers
    omit 21aade4  Fix more PodMutationHook issues for backwards compatibility (#10084)
    omit a513964  fixup! Pin fsspec<8.0.0 for Python <3.6 to fix Static Checks
    omit badd86a  Pin fsspec<8.0.0 for Python <3.6 to fix Static Checks
    omit 54d8aae  Revert "Enable pretty output in mypy (#9785)"
    omit fa88bf9  compat fix for Xcom
    omit aab7b59  Add getimport for xcom change
    omit ca0dd1a  Make XCom 2.7 compatible
    omit 5e3ba14  Allow to define custom XCom class (#8560)
    omit 707f995  Fix check_integration pre-commit test (#9869)
    omit ab1d7ec  Enable pretty output in mypy (#9785)
    omit f872c54  Fix docstrings in BigQueryGetDataOperator (#10042)
    omit e8d6db6  Pin Pyarrow < 1.0
    omit 2cb1d82  Set pytest version to be < 6.0.0 due to breaking changes (#10043)
    omit f83560d  Pin pymongo version to <3.11.0
    omit 4f7a453  Add pre 1.10.11 Kubernetes Paths back with Deprecation Warning (#10067)
    omit ea6c305  Fixes PodMutationHook for backwards compatibility (#9903)
    omit 133285f  Fix bug in executor_config when defining resources (#9935)
    omit 8c6bebe  Breeze / KinD  - support earlier k8s versions, fix recreate and kubectl versioning (#9905)
    omit 5382db3  Pin google-cloud-container to <2 (#9901)
    omit 4f8b838  Clean up tmp directory when exiting from breeze shell (#9930)
    omit 7d24e73  Pin github checkout action to v2 (#9938)
    omit babe75c  fixup! fixup! Constraint files are now maintained automatically (#9889)
    omit 15fc941  fixup! Constraint files are now maintained automatically (#9889)
    omit 627aa71  Tests are cancelled if any of faster checks fail (#9917)
    omit 5b03424  Simplify pull request template (#9896)
    omit 26d6c6e  Constraint files are now maintained automatically (#9889)
    omit 86cc96c  Added "all" to allowed breeze integrations and tried to clarify on fail (#9872)
    omit d33f6d0  Reorganizing of CI tests (#9654)
    omit 07d0c84  Group CI scripts in subdirectories (#9653)
    omit dff4443  For now cloud tools are not needed in CI (#9818)
    omit d66494f  Remove package.json and yarn.lock from the prod image (#9814)
    omit 5f4f8f4  The group of embedded DAGs should be root to be OpenShift compatible (#9794)
    omit 1c51f98  Tests should also be triggered when there is just setup.py change (#9690)
    omit 6051b41  [AIRFLOW-5391] Do not re-run skipped tasks when they are cleared (#7276)
    omit 320ccad  Fix task_instance_mutation_hook (#9910)
    omit 8ea8dd1  fixup! Update some dependencies (#9684)
    omit 29835cd  Python base image version is retrieved in the right place (#9931)
    omit 0dc20fd  Update some dependencies (#9684)
     new 0b5f0fc  Update some dependencies (#9684)
     new e2e6853  Python base image version is retrieved in the right place (#9931)
     new e6b017a  Fix task_instance_mutation_hook (#9910)
     new 179e930  [AIRFLOW-5391] Do not re-run skipped tasks when they are cleared (#7276)
     new 0718977  Tests should also be triggered when there is just setup.py change (#9690)
     new d61c33d  The group of embedded DAGs should be root to be OpenShift compatible (#9794)
     new 1d4782e  Remove package.json and yarn.lock from the prod image (#9814)
     new 1a41879  For now cloud tools are not needed in CI (#9818)
     new 7ec2b3a  Group CI scripts in subdirectories (#9653)
     new f6c8f51  Reorganizing of CI tests (#9654)
     new 6e290cf  Added "all" to allowed breeze integrations and tried to clarify on fail (#9872)
     new 5f93baf  Constraint files are now maintained automatically (#9889)
     new 25e0e26  Simplify pull request template (#9896)
     new e33ffbe  Tests are cancelled if any of faster checks fail (#9917)
     new c72ce92  Pin github checkout action to v2 (#9938)
     new 213500e  Clean up tmp directory when exiting from breeze shell (#9930)
     new 6d10a7f  Pin google-cloud-container to <2 (#9901)
     new 6b219e1  Breeze / KinD  - support earlier k8s versions, fix recreate and kubectl versioning (#9905)
     new 05ec21a  Fix bug in executor_config when defining resources (#9935)
     new bcd02dd  Fixes PodMutationHook for backwards compatibility (#9903)
     new bfa089d  Add pre 1.10.11 Kubernetes Paths back with Deprecation Warning (#10067)
     new 5c9ff4d  Pin pymongo version to <3.11.0
     new 6f8b0cc  Set pytest version to be < 6.0.0 due to breaking changes (#10043)
     new 70a7416  Pin Pyarrow < 1.0
     new 06b06d7  Fix docstrings in BigQueryGetDataOperator (#10042)
     new 64c89db  Allow to define custom XCom class (#8560)
     new ec1cb7d  Make XCom 2.7 compatible
     new 1a8ba6a  Add getimport for xcom change
     new 3e1d88e  Pin fsspec<8.0.0 for Python <3.6 to fix Static Checks
     new c230156  Fix more PodMutationHook issues for backwards compatibility (#10084)
     new c47a7c4  Fix KubernetesPodOperator reattachment (#10230)
     new 242d6d0  Makes multi-namespace mode optional (#9570)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a281faa)
            \
             N -- N -- N   refs/heads/v1-10-test (242d6d0)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.
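
As a rough local check (the hashes are taken from the listing above; the
exact output depends on which refs your clone has fetched), one can verify
this distinction with plain git commands:

    git fetch origin
    # an "omit" revision is still reachable from some other ref:
    git for-each-ref --contains 90fe915     # may list refs that still contain it
    # a "discard" revision is reachable from no ref; it will eventually be
    # garbage-collected, though the object may still resolve until then:
    git for-each-ref --contains a281faa     # prints nothing once the ref has moved
    git cat-file -t a281faa                 # "commit" until gc prunes the object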

The 32 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:


[airflow] 14/32: Tests are cancelled if any of faster checks fail (#9917)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit e33ffbef7db99a220642d8b12b0a4c25275ef598
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Wed Jul 22 08:05:36 2020 +0200

    Tests are cancelled if any of faster checks fail (#9917)
    
    (cherry picked from commit 508d7d202ac13531ac840e8fdcc1ad0adb7c9460)
---
 .github/workflows/ci.yml | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index aac8be1..8cb1efa 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -60,7 +60,7 @@ jobs:
 
   static-checks:
     timeout-minutes: 60
-    name: "Checks"
+    name: "Static checks"
     runs-on: ubuntu-latest
     needs:
       - cancel-previous-workflow-run
@@ -84,6 +84,9 @@ jobs:
         run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Static checks"
         run: ./scripts/ci/static_checks/ci_run_static_checks.sh
+      - name: "Cancel workflow on static checks failure"
+        if: ${{ failure() }}
+        uses: andymckay/cancel-action@0.2
   docs:
     timeout-minutes: 60
     name: "Build docs"
@@ -99,6 +102,9 @@ jobs:
         run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Build docs"
         run: ./scripts/ci/docs/ci_docs.sh
+      - name: "Cancel workflow on docs failure"
+        if: ${{ failure() }}
+        uses: andymckay/cancel-action@0.2
 
   trigger-tests:
     timeout-minutes: 5
@@ -305,6 +311,9 @@ jobs:
       - uses: actions/checkout@master
       - name: "Helm Tests"
         run: ./scripts/ci/kubernetes/ci_run_helm_testing.sh
+      - name: "Cancel workflow on helm-tests failure"
+        if: ${{ failure() }}
+        uses: andymckay/cancel-action@0.2
 
   build-prod-image:
     timeout-minutes: 60
@@ -319,6 +328,9 @@ jobs:
       - uses: actions/checkout@master
       - name: "Build PROD image ${{ matrix.python-version }}"
         run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
+      - name: "Cancel workflow on build prod image failure"
+        if: ${{ failure() }}
+        uses: andymckay/cancel-action@0.2
 
   push-prod-images-to-github-cache:
     timeout-minutes: 80
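
For context, the added "cancel-action" steps amount to calling the GitHub
Actions "cancel a workflow run" REST endpoint for the current run. A minimal
shell equivalent is sketched below; it assumes a token allowed to cancel runs
is exported as GITHUB_TOKEN, while GITHUB_REPOSITORY and GITHUB_RUN_ID are
set by Actions automatically:

    curl -sS -X POST \
        -H "Authorization: token ${GITHUB_TOKEN}" \
        -H "Accept: application/vnd.github.v3+json" \
        "https://api.github.com/repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/cancel"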


[airflow] 08/32: For now cloud tools are not needed in CI (#9818)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 1a418797eb650cae501ca5460a142b055e335842
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Tue Jul 14 16:35:33 2020 +0200

    For now cloud tools are not needed in CI (#9818)
    
    Currently an "unbound" variable error is printed in CI logs
    because of that.
    
    (cherry picked from commit 69f82e66af54fb85a07ee6c7c85b8d4f5140e758)
---
 scripts/ci/in_container/entrypoint_ci.sh | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/scripts/ci/in_container/entrypoint_ci.sh b/scripts/ci/in_container/entrypoint_ci.sh
index 349b092..4d1bf0c 100755
--- a/scripts/ci/in_container/entrypoint_ci.sh
+++ b/scripts/ci/in_container/entrypoint_ci.sh
@@ -45,9 +45,11 @@ RUN_TESTS=${RUN_TESTS:="false"}
 CI=${CI:="false"}
 INSTALL_AIRFLOW_VERSION="${INSTALL_AIRFLOW_VERSION:=""}"
 
-# Create links for useful CLI tools
-# shellcheck source=scripts/ci/run_cli_tool.sh
-source <(bash scripts/ci/run_cli_tool.sh)
+if [[ ${CI} == "false" ]]; then
+    # Create links for useful CLI tools
+    # shellcheck source=scripts/ci/run_cli_tool.sh
+    source <(bash scripts/ci/run_cli_tool.sh)
+fi
 
 if [[ ${AIRFLOW_VERSION} == *1.10* || ${INSTALL_AIRFLOW_VERSION} == *1.10* ]]; then
     export RUN_AIRFLOW_1_10="true"
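
The "unbound" variable error mentioned in the commit message is bash's
behaviour under `set -u` (which the CI scripts enable) when an unset variable
is referenced. A minimal sketch of the failure mode the guard above avoids;
the variable name is hypothetical and purely illustrative:

    #!/usr/bin/env bash
    set -euo pipefail
    CI=${CI:="false"}

    # Unguarded, a reference to a variable that CI never sets aborts the
    # script with "... unbound variable":
    # echo "${HYPOTHETICAL_TOOL_HOME}"

    # Guarded as in the patch above, the optional setup is skipped entirely
    # when CI == "true", so the unset variable is never evaluated there:
    if [[ ${CI} == "false" ]]; then
        # stand-in for: source <(bash scripts/ci/run_cli_tool.sh)
        echo "setting up local CLI tool links"
    fi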


[airflow] 13/32: Simplify pull request template (#9896)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 25e0e2617c536d4ac9128cbace381082441a3db0
Author: Tomek Urbaszek <tu...@gmail.com>
AuthorDate: Tue Jul 21 12:18:07 2020 +0200

    Simplify pull request template (#9896)
    
    Remove the checklist of always checked points.
    
    (cherry picked from commit 7dd5c11f966df0cb20b7503be3438e19248e66ea)
---
 .github/PULL_REQUEST_TEMPLATE.md | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 64625e4..1e3c23d 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,10 +1,23 @@
-- [ ] Description above provides context of the change
-- [ ] Commit message contains [\[AIRFLOW-XXXX\]](https://issues.apache.org/jira/browse/AIRFLOW-XXXX) or `[AIRFLOW-XXXX]` for document-only changes
-- [ ] Unit tests coverage for changes (not needed for documentation changes)
-- [ ] Commits follow "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)"
-- [ ] Relevant documentation is updated including usage instructions.
-- [ ] I will engage committers as explained in [Contribution Workflow Example](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#contribution-workflow-example).
+<!--
+Thank you for contributing! Please make sure that your code changes
+are covered with tests. And in case of new features or big changes
+remember to adjust the documentation.
+
+Feel free to ping committers for the review!
+
+In case of existing issue, reference it using one of the following:
+
+closes: #ISSUE
+related: #ISSUE
+
+How to write a good git commit message:
+http://chris.beams.io/posts/git-commit/
+-->
+
+---
+**^ Add meaningful description above**
+
+Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
 In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
 In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
 In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md).
-Read the [Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines) for more information.


[airflow] 17/32: Pin google-cloud-container to <2 (#9901)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 6d10a7f71c6aa10f7020bd4b69355f36bc9179b7
Author: Ephraim Anierobi <sp...@gmail.com>
AuthorDate: Mon Jul 20 22:56:37 2020 +0100

    Pin google-cloud-container to <2 (#9901)
    
    (cherry picked from commit 560e0b504b52ead405b604934893c784ddf4dafa)
---
 Dockerfile.ci | 2 +-
 setup.py      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Dockerfile.ci b/Dockerfile.ci
index 5d4f240..2797e34 100644
--- a/Dockerfile.ci
+++ b/Dockerfile.ci
@@ -186,7 +186,7 @@ RUN mkdir -pv ${AIRFLOW_HOME} && \
     mkdir -pv ${AIRFLOW_HOME}/logs
 
 # Increase the value here to force reinstalling Apache Airflow pip dependencies
-ARG PIP_DEPENDENCIES_EPOCH_NUMBER="3"
+ARG PIP_DEPENDENCIES_EPOCH_NUMBER="4"
 ENV PIP_DEPENDENCIES_EPOCH_NUMBER=${PIP_DEPENDENCIES_EPOCH_NUMBER}
 
 # Optimizing installation of Cassandra driver
diff --git a/setup.py b/setup.py
index 1fe821b..e11ce70 100644
--- a/setup.py
+++ b/setup.py
@@ -255,7 +255,7 @@ gcp = [
     'google-auth>=1.0.0, <2.0.0dev',
     'google-auth-httplib2>=0.0.1',
     'google-cloud-bigtable>=1.0.0',
-    'google-cloud-container>=0.1.1',
+    'google-cloud-container>=0.1.1,<2.0',
     'google-cloud-dlp>=0.11.0',
     'google-cloud-language>=1.1.1',
     'google-cloud-secret-manager>=0.2.0',


[airflow] 16/32: Clean up tmp directory when exiting from breeze shell (#9930)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 213500eaebbad06cb6358d63330d7e82803e517b
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Wed Jul 22 19:54:35 2020 +0200

    Clean up tmp directory when exiting from breeze shell (#9930)
    
    Since we are now mounting the tmp dir inside the container, some
    remnants of what happens inside remain after exit.
    This is particularly bad if you are using tmux (some of the
    directories remaining there prevent tmux from being re-run).

    This change cleans up the /tmp directory on exit from the Breeze
    command. It does it from inside the container so that we clean up
    all root-owned files without sudo.
    
    (cherry picked from commit a9c871b47c017e97374efef64c02fdde9792aff5)
---
 breeze                                              |  1 +
 scripts/ci/in_container/_in_container_utils.sh      | 10 ++++++++++
 scripts/ci/in_container/run_clear_tmp.sh            | 21 +++++++++++++++++++++
 .../tools/{ci_fix_ownership.sh => ci_clear_tmp.sh}  |  5 ++---
 scripts/ci/tools/ci_fix_ownership.sh                |  3 ++-
 5 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/breeze b/breeze
index 27eaee0..91585a1 100755
--- a/breeze
+++ b/breeze
@@ -1963,6 +1963,7 @@ function run_breeze_command {
                 "${BUILD_CACHE_DIR}/${LAST_DC_PROD_FILE}" run --service-ports --rm airflow "${@}"
             else
                 "${BUILD_CACHE_DIR}/${LAST_DC_CI_FILE}" run --service-ports --rm airflow "${@}"
+                "${SCRIPTS_CI_DIR}/tools/ci_clear_tmp.sh"
             fi
             ;;
         run_exec)
diff --git a/scripts/ci/in_container/_in_container_utils.sh b/scripts/ci/in_container/_in_container_utils.sh
index f2e94d4..946bd32 100644
--- a/scripts/ci/in_container/_in_container_utils.sh
+++ b/scripts/ci/in_container/_in_container_utils.sh
@@ -115,6 +115,16 @@ function in_container_fix_ownership() {
     fi
 }
 
+function in_container_clear_tmp() {
+    if [[ ${VERBOSE} == "true" ]]; then
+        echo "Cleaning ${AIRFLOW_SOURCES}/tmp from the container"
+    fi
+    rm -rf /tmp/*
+    if [[ ${VERBOSE} == "true" ]]; then
+        echo "Cleaned ${AIRFLOW_SOURCES}/tmp from the container"
+    fi
+}
+
 function in_container_go_to_airflow_sources() {
     pushd "${AIRFLOW_SOURCES}"  &>/dev/null || exit 1
 }
diff --git a/scripts/ci/in_container/run_clear_tmp.sh b/scripts/ci/in_container/run_clear_tmp.sh
new file mode 100755
index 0000000..324a795
--- /dev/null
+++ b/scripts/ci/in_container/run_clear_tmp.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# shellcheck source=scripts/ci/in_container/_in_container_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/_in_container_script_init.sh"
+
+in_container_clear_tmp
diff --git a/scripts/ci/tools/ci_fix_ownership.sh b/scripts/ci/tools/ci_clear_tmp.sh
similarity index 90%
copy from scripts/ci/tools/ci_fix_ownership.sh
copy to scripts/ci/tools/ci_clear_tmp.sh
index 8cde42d..9a8cb4a 100755
--- a/scripts/ci/tools/ci_fix_ownership.sh
+++ b/scripts/ci/tools/ci_clear_tmp.sh
@@ -36,10 +36,9 @@ HOST_OS="$(uname -s)"
 export HOST_USER_ID
 export HOST_GROUP_ID
 export HOST_OS
-export BACKEND="sqlite"
 
 docker-compose \
     -f "${SCRIPTS_CI_DIR}/docker-compose/base.yml" \
     -f "${SCRIPTS_CI_DIR}/docker-compose/local.yml" \
-    -f "${SCRIPTS_CI_DIR}/docker-compose/forward-credentials.yml" \
-    run airflow /opt/airflow/scripts/ci/in_container/run_fix_ownership.sh
+   run --entrypoint /bin/bash \
+    airflow -c /opt/airflow/scripts/ci/in_container/run_clear_tmp.sh
diff --git a/scripts/ci/tools/ci_fix_ownership.sh b/scripts/ci/tools/ci_fix_ownership.sh
index 8cde42d..d3ae4ba 100755
--- a/scripts/ci/tools/ci_fix_ownership.sh
+++ b/scripts/ci/tools/ci_fix_ownership.sh
@@ -42,4 +42,5 @@ docker-compose \
     -f "${SCRIPTS_CI_DIR}/docker-compose/base.yml" \
     -f "${SCRIPTS_CI_DIR}/docker-compose/local.yml" \
     -f "${SCRIPTS_CI_DIR}/docker-compose/forward-credentials.yml" \
-    run airflow /opt/airflow/scripts/ci/in_container/run_fix_ownership.sh
+    run --entrypoint /bin/bash \
+    airflow -c /opt/airflow/scripts/ci/in_container/run_fix_ownership.sh
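
The reason the cleanup runs inside the container rather than on the host is
file ownership on the bind mount: directories created by the container's
root user cannot be emptied by a regular host user. A small illustration of
that constraint (paths and image are arbitrary, purely for demonstration):

    mkdir tmpdemo
    docker run --rm -v "$PWD/tmpdemo:/work" busybox mkdir -p /work/root-dir/inner
    rm -rf tmpdemo/root-dir     # fails for a regular host user: the inner
                                # entries sit in a root-owned directory
    docker run --rm -v "$PWD/tmpdemo:/work" busybox rm -rf /work/root-dir
                                # succeeds - the container process runs as root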


[airflow] 12/32: Constraint files are now maintained automatically (#9889)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 5f93baf3f8a785b93b6ee9811d3938d8200c55ad
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Mon Jul 20 14:36:03 2020 +0200

    Constraint files are now maintained automatically (#9889)
    
    * Constraint files are now maintained automatically
    
    * No need to generate requirements when setup.py changes
    * requirements are kept in separate orphan branches not in main repo
    * merges to master verify if latest requirements are working and
      push tested requirements to orphaned branches
    * we keep history of requirement changes and can label them
      individually for each version (by constraint-1.10.n tag name)
    * consistently changed all references to be 'constraints' not
      'requirements'
    
    (cherry picked from commit de9eaeb434747897a192ef31815fbdd519e29c4d)
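
In practice this means Airflow can be installed against the generated
constraint files straight from the orphan branches. A hedged example follows;
the Airflow version and the exact file layout on the constraint branch are
illustrative, and the README section referenced in the workflow below has the
canonical instructions:

    pip install "apache-airflow==1.10.12" \
        --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1-10/constraints-3.6.txt"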
---
 .dockerignore                                      |   1 -
 .github/workflows/ci.yml                           | 147 +++++++++++++--------
 BREEZE.rst                                         |  64 +++++----
 CI.rst                                             | 113 +++++++++-------
 CONTRIBUTING.rst                                   |  71 ++++------
 Dockerfile                                         |  17 +--
 Dockerfile.ci                                      |  21 +--
 IMAGES.rst                                         |  75 +++++------
 INSTALL                                            |   7 +-
 LOCAL_VIRTUALENV.rst                               |  13 +-
 README.md                                          |  17 +--
 breeze                                             |  41 +++---
 breeze-complete                                    |   2 +-
 common/_default_branch.sh                          |   1 +
 docs/installation.rst                              |  31 +++--
 requirements/REMOVE.md                             |  22 +++
 .../ci_generate_constraints.sh}                    |   2 +-
 scripts/ci/docker-compose/local.yml                |   1 -
 .../ci/in_container/run_generate_constraints.sh    |  50 +++++++
 .../ci/in_container/run_generate_requirements.sh   |  80 -----------
 scripts/ci/kubernetes/ci_run_kubernetes_tests.sh   |   5 +-
 scripts/ci/libraries/_build_images.sh              |  34 ++---
 scripts/ci/libraries/_initialization.sh            |  16 +--
 scripts/ci/libraries/_local_mounts.sh              |   1 -
 scripts/ci/libraries/_runs.sh                      |   8 +-
 .../pre_commit/pre_commit_generate_requirements.sh |  24 ----
 scripts/ci/static_checks/ci_run_static_checks.sh   |   3 +
 .../ci/tools/ci_check_if_tests_should_be_run.sh    |   1 -
 28 files changed, 444 insertions(+), 424 deletions(-)

diff --git a/.dockerignore b/.dockerignore
index 6f89516..d7d621d 100644
--- a/.dockerignore
+++ b/.dockerignore
@@ -46,7 +46,6 @@
 !MANIFEST.in
 !NOTICE
 !.github
-!requirements
 !empty
 
 # Avoid triggering context change on README change (new companies using Airflow)
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 604fa0d..aac8be1 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -31,7 +31,7 @@ env:
   SKIP_CI_IMAGE_CHECK: "true"
   DB_RESET: "true"
   VERBOSE: "true"
-  UPGRADE_TO_LATEST_REQUIREMENTS: "false"
+  UPGRADE_TO_LATEST_CONSTRAINTS: ${{ github.event_name == 'push' || github.event_name == 'scheduled' }}
   PYTHON_MAJOR_MINOR_VERSION: 3.6
   USE_GITHUB_REGISTRY: "true"
   CACHE_IMAGE_PREFIX: ${{ github.repository }}
@@ -66,7 +66,6 @@ jobs:
       - cancel-previous-workflow-run
     env:
       MOUNT_SOURCE_DIR_FOR_STATIC_CHECKS: "true"
-      CI_JOB_TYPE: "Static checks"
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -84,19 +83,13 @@ jobs:
       - name: "Build CI image"
         run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Static checks"
-        if: success()
-        run: |
-          python -m pip install pre-commit \
-              --constraint requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt
-          ./scripts/ci/static_checks/ci_run_static_checks.sh
+        run: ./scripts/ci/static_checks/ci_run_static_checks.sh
   docs:
     timeout-minutes: 60
     name: "Build docs"
     runs-on: ubuntu-latest
     needs:
       - cancel-previous-workflow-run
-    env:
-      CI_JOB_TYPE: "Documentation"
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -142,7 +135,6 @@ jobs:
       BACKEND: postgres
       TEST_TYPE: ${{ matrix.test-type }}
       RUN_TESTS: "true"
-      CI_JOB_TYPE: "Tests"
       SKIP_CI_IMAGE_CHECK: "true"
       RUNTIME: "kubernetes"
       ENABLE_KIND_CLUSTER: "true"
@@ -173,8 +165,7 @@ jobs:
           cache-name: cache-kubernetes-tests-virtualenv-v4
         with:
           path: .build/.kubernetes_venv
-          key: "${{ env.cache-name }}-${{ github.job }}-\
-${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt') }}"
+          key: "${{ env.cache-name }}-${{ github.job }}-v1"
       - name: "Tests"
         run: ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
       - uses: actions/upload-artifact@v2
@@ -201,7 +192,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
       POSTGRES_VERSION: ${{ matrix.postgres-version }}
       RUN_TESTS: "true"
-      CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
@@ -232,7 +222,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
       MYSQL_VERSION: ${{ matrix.mysql-version }}
       RUN_TESTS: "true"
-      CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
@@ -262,7 +251,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
       TEST_TYPE: ${{ matrix.test-type }}
       RUN_TESTS: "true"
-      CI_JOB_TYPE: "Tests"
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
@@ -293,7 +281,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
       POSTGRES_VERSION: ${{ matrix.postgres-version }}
       RUN_TESTS: "true"
-      CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
@@ -314,38 +301,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     runs-on: ubuntu-latest
     needs:
       - cancel-previous-workflow-run
-    env:
-      CI_JOB_TYPE: "Tests"
     steps:
       - uses: actions/checkout@master
       - name: "Helm Tests"
         run: ./scripts/ci/kubernetes/ci_run_helm_testing.sh
 
-  requirements:
-    timeout-minutes: 80
-    name: "Requirements"
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
-      fail-fast: false
-    needs:
-      - cancel-previous-workflow-run
-    env:
-      PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
-      CHECK_REQUIREMENTS_ONLY: true
-      UPGRADE_WHILE_GENERATING_REQUIREMENTS: ${{ github.event_name == 'schedule' }}
-      CI_JOB_TYPE: "Requirements"
-    steps:
-      - uses: actions/checkout@master
-      - uses: actions/setup-python@v1
-      - name: "Free space"
-        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
-      - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
-      - name: "Generate requirements"
-        run: ./scripts/ci/requirements/ci_generate_requirements.sh
-
   build-prod-image:
     timeout-minutes: 60
     name: "Build prod image Py${{ matrix.python-version }}"
@@ -355,7 +315,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
     env:
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
-      CI_JOB_TYPE: "Prod image"
     steps:
       - uses: actions/checkout@master
       - name: "Build PROD image ${{ matrix.python-version }}"
@@ -369,16 +328,16 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       - tests-sqlite
       - tests-postgres
       - tests-mysql
-      - requirements
       - build-prod-image
       - docs
-    if: github.ref == 'refs/heads/master' && github.event_name != 'schedule'
+    if: |
+      (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/v1-10-test') &&
+      github.event_name != 'schedule'
     strategy:
       matrix:
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
     env:
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
-      CI_JOB_TYPE: "Prod image"
     steps:
       - uses: actions/checkout@master
       - name: "Free space"
@@ -396,12 +355,9 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       - tests-sqlite
       - tests-postgres
       - tests-mysql
-      - requirements
-      - build-prod-image
       - docs
     if: |
-      (github.ref == 'refs/heads/master' ||
-      github.ref == 'refs/heads/v1-10-test' ) &&
+      (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/v1-10-test' ) &&
       github.event_name != 'schedule'
     strategy:
       matrix:
@@ -409,7 +365,6 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     env:
       PULL_PYTHON_BASE_IMAGES_FROM_CACHE: "false"
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
-      CI_JOB_TYPE: "Prod image"
     steps:
       - uses: actions/checkout@master
       - name: "Free space"
@@ -418,3 +373,91 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Push CI image ${{ matrix.python-version }}"
         run: ./scripts/ci/images/ci_push_ci_image.sh
+
+  constraints:
+    timeout-minutes: 80
+    name: "Constraints"
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
+      fail-fast: false
+    needs:
+      - cancel-previous-workflow-run
+      - tests-sqlite
+      - tests-mysql
+      - tests-postgres
+      - tests-kubernetes
+    env:
+      PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
+    if: |
+      github.event_name == 'push' &&
+      (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/v1-10-test')
+    steps:
+      - uses: actions/checkout@master
+      - uses: actions/setup-python@v1
+      - name: "Free space"
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
+      - name: "Build CI image ${{ matrix.python-version }}"
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
+      - name: "Generate constraints"
+        run: ./scripts/ci/constraints/ci_generate_constraints.sh
+      - uses: actions/upload-artifact@v2
+        name: Upload constraint artifacts
+        with:
+          name: 'constraints-${{matrix.python-version}}'
+          path: 'files/constraints-${{matrix.python-version}}/constraints-${{matrix.python-version}}.txt'
+
+  constraints-push:
+    timeout-minutes: 10
+    name: "Constraints push"
+    runs-on: ubuntu-latest
+    needs:
+      - constraints
+    if: |
+      github.event_name == 'push' &&
+      (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/v1-10-test')
+    steps:
+      - name: "Set constraints branch name"
+        id: constraints-branch
+        run: |
+          if [[ ${GITHUB_REF} == 'refs/heads/master' ]]; then
+              echo "::set-output name=branch::constraints-master"
+          elif [[ ${GITHUB_REF} == 'refs/heads/v1-10-test' ]]; then
+              echo "::set-output name=branch::constraints-1-10"
+          else
+              echo
+              echo "Unexpected ref ${GITHUB_REF}. Exiting!"
+              echo
+              exit 1
+          fi
+      - uses: actions/checkout@v2
+        with:
+          path: "repo"
+          ref: ${{ steps.constraints-branch.outputs.branch }}
+      - uses: actions/download-artifact@v2
+        with:
+          path: 'artifacts'
+        name: "Get all artifacts (constraints)"
+      - name: "Commit changed constraint files"
+        run: |
+          cp -v ./artifacts/constraints-*/constraints*.txt repo/
+          cd repo
+          git config --local user.email "dev@airflow.apache.org"
+          git config --local user.name "Automated Github Actions commit"
+          git diff --exit-code || git commit --all --message "Updating constraints. GH run id:${GITHUB_RUN_ID}
+
+          This update in constraints is automatically committed by CI pushing
+          reference '${GITHUB_REF}' to ${GITHUB_REPOSITORY} with commit sha
+          ${GITHUB_SHA}.
+
+          All tests passed in this build so we determined we can push the updated constraints.
+
+          See https://github.com/apache/airflow/blob/master/README.md#installing-from-pypi for details.
+          "
+      - name: Push changes
+        uses: ad-m/github-push-action@master
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          branch: ${{ steps.constraints-branch.outputs.branch }}
+          directory: "repo"
diff --git a/BREEZE.rst b/BREEZE.rst
index c377ec0..5976b0f 100644
--- a/BREEZE.rst
+++ b/BREEZE.rst
@@ -328,7 +328,7 @@ Managing CI environment:
     * Stop running interactive environment with ``breeze stop`` command
     * Restart running interactive environment with ``breeze restart`` command
     * Run test specified with ``breeze tests`` command
-    * Generate requirements with ``breeze generate-requirements`` command
+    * Generate constraints with ``breeze generate-constraints`` command
     * Execute arbitrary command in the test environment with ``breeze shell`` command
     * Execute arbitrary docker-compose command with ``breeze docker-compose`` command
     * Push docker images with ``breeze push-image`` command (require committer's rights to push images)
@@ -693,40 +693,34 @@ easily identify the location the problems with documentation originated from.
       </a>
     </div>
 
-Generating requirements
------------------------
+Generating constraints
+----------------------
 
-Whenever you modify and commit setup.py, you need to re-generate requirement files. Those requirement
-files ara stored separately for each python version in the ``requirements`` folder. Those are
-constraints rather than requirements as described in detail in the
-`CONTRIBUTING <CONTRIBUTING.rst#pinned-requirement-files>`_ contributing documentation.
+Whenever setup.py gets modified, the CI master job will re-generate constraint files. Those constraint
+files ara stored in separated orphan branches: ``constraints-master`` and ``constraint-1-10``.
+They are stored separately for each python version. Those are
+constraint files as described in detail in the
+`CONTRIBUTING <CONTRIBUTING.rst#pinned-constraint-files>`_ contributing documentation.
 
-In case you modify setup.py you need to update the requirements - for every python version supported.
+In case someone modifies setup.py, the ``CRON`` scheduled CI build automatically upgrades and
+pushes changed to the constraint files, however you can also perform test run of this locally using
+``generate-constraints`` command of Breeze.
 
 .. code-block:: bash
 
-  ./breeze generate-requirements --python 3.6
+  ./breeze generate-constraints --python 3.6
 
 .. code-block:: bash
 
-  ./breeze generate-requirements --python 3.7
+  ./breeze generate-constraints --python 3.7
 
 .. code-block:: bash
 
-  ./breeze generate-requirements --python 3.8
+  ./breeze generate-constraints --python 3.8
 
-
-This bumps requirements to latest versions and stores hash of setup.py so that we are automatically
-upgrading the requirements as we add new ones.
-
-.. raw:: html
-
-    <div align="center">
-      <a href="https://youtu.be/4MCTXq-oF68?t=1823">
-        <img src="images/breeze/overlayed_breeze_generate_requirements.png" width="640"
-             alt="Airflow Breeze - Generate requirements">
-      </a>
-    </div>
+This bumps the constraint files to latest versions and stores hash of setup.py. The generated constraint
+and setup.py hash files are stored in the ``files`` folder and while generating the constraints diff
+of changes vs the previous constraint files is printed.
 
 Using local virtualenv environment in Your Host IDE
 ---------------------------------------------------
@@ -752,7 +746,7 @@ To use your host IDE with Breeze:
 
 .. code-block:: bash
 
-  ./breeze generate-requirements --python 3.8
+  ./breeze initialize-local-virtualenv --python 3.8
 
 4. Select the virtualenv you created as the project's default virtualenv in your IDE.
 
@@ -1050,7 +1044,7 @@ This is the current syntax for  `./breeze <./breeze>`_:
     build-image                              Builds CI or Production docker image
     cleanup-image                            Cleans up the container image created
     exec                                     Execs into running breeze container in new terminal
-    generate-requirements                    Generates pinned requirements for pip dependencies
+    generate-constraints                     Generates pinned constraint files
     push-image                               Pushes images to registry
     initialize-local-virtualenv              Initializes local virtualenv
     setup-autocomplete                       Sets up autocomplete for breeze
@@ -1295,16 +1289,18 @@ This is the current syntax for  `./breeze <./breeze>`_:
   ####################################################################################################
 
 
-  Detailed usage for command: generate-requirements
+  Detailed usage for command: generate-constraints
 
 
-  breeze generate-requirements [FLAGS]
+  breeze generate-constraints [FLAGS]
 
-        Generates pinned requirements from setup.py. Those requirements are generated in requirements
-        directory - separately for different python version. Those requirements are used to run
-        CI builds as well as run repeatable production image builds. You can use those requirements
-        to predictably install released Airflow versions. You should run it always after you update
-        setup.py.
+        Generates pinned constraint files from setup.py. Those files are generated in files folder
+        - separate files for different python version. Those constraint files when pushed to orphan
+        constraint-master and constraint-1-10 branches are used to generate repeatable
+        CI builds as well as run repeatable production image builds. You can use those constraints
+        to predictably install released Airflow versions. This is mainly used to test the constraint
+        generation - constraints are pushed to the orphan branches by a successful scheduled
+        CRON job in CI automatically.
 
   Flags:
 
@@ -1380,8 +1376,8 @@ This is the current syntax for  `./breeze <./breeze>`_:
   breeze initialize-local-virtualenv [FLAGS]
 
         Initializes locally created virtualenv installing all dependencies of Airflow
-        taking into account the frozen requirements from requirements folder.
-        This local virtualenv can be used to aid autocompletion and IDE support as
+        taking into account the constraints for the version specified.
+        This local virtualenv can be used to aid auto-completion and IDE support as
         well as run unit tests directly from the IDE. You need to have virtualenv
         activated before running this command.
 
diff --git a/CI.rst b/CI.rst
index 210cf90..8966073 100644
--- a/CI.rst
+++ b/CI.rst
@@ -70,41 +70,58 @@ CI run types
 The following CI Job runs are currently run for Apache Airflow, and each of the runs have different
 purpose and context.
 
-* **Pull Request Run** - Those runs are results of PR from the forks made by contributors. Most builds
-  for Apache Airflow fall into this category. They are executed in the context of the "Fork", not main
-  Airflow Code Repository which means that they have only "read" permission to all the GitHub resources
-  (container registry, code repository). This is necessary as the code in those PRs (including CI job
-  definition) might be modified by people who are not committers for the Apache Airflow Code Repository.
-  The main purpose of those jobs is to check if PR builds cleanly, if the test run properly and if
-  the PR is ready to review and merge. The runs are using cached images from the Private GitHub registry -
-  CI, Production Images as well as base Python images that are also cached in the Private GitHub registry.
-
-* **Direct Push/Merge Run** - Those runs are results of direct pushes done by the committers or as result
-  of merge of a Pull Request by the committers. Those runs execute in the context of the Apache Airflow
-  Code Repository and have also write permission for GitHub resources (container registry, code repository).
-  The main purpose for the run is to check if the code after merge still holds all the assertions - like
-  whether it still builds, all tests are green. This is needed because some of the conflicting changes from
-  multiple PRs might cause build and test failures after merge even if they do not fail in isolation. Also
-  those runs are already reviewed and confirmed by the committers so they can be used to do some housekeeping
-  - for now they are pushing most recent image build in the PR to the Github Private Registry - which is our
-  image cache for all the builds. Another purpose of those runs is to refresh latest Python base images.
-  Python base images are refreshed with varying frequency (once every few months usually but sometimes
-  several times per week) with the latest security and bug fixes. Those patch level images releases can
-  occasionally break Airflow builds (specifically Docker image builds based on those images) therefore
-  in PRs we always use latest "good" python image that we store in the private GitHub cache. The direct
-  push/master builds are not using registry cache to pull the python images - they are directly
-  pulling the images from DockerHub, therefore they will try the latest images after they are released
-  and in case they are fine, CI Docker image is build and tests are passing - those jobs will push the base
-  images to the private GitHub Registry so that they be used by subsequent PR runs.
-
-* **Scheduled Run** - those runs are results of (nightly) triggered jobs - only for well-defined branches:
-  ``master`` and ``v1-10-test`` they execute nightly. Their main purpose is to check if there was no impact
-  of external dependency changes on the Apache Airflow code (for example transitive dependencies released
-  that fail the build). They also check if the Docker images can be build from the scratch (again - to see
-  if some dependencies have not changed - for example downloaded package releases etc. Another reason for
-  the nightly build is that the builds tags most recent master or v1-10-test code with "master-nightly" and
-  "v1-10-test" tags respectively so that DockerHub build can pick up the moved tag and prepare a nightly
-  "public" build in the DockerHub.
+Pull request run
+----------------
+
+Those runs are results of PR from the forks made by contributors. Most builds for Apache Airflow fall
+into this category. They are executed in the context of the "Fork", not main
+Airflow Code Repository which means that they have only "read" permission to all the GitHub resources
+(container registry, code repository). This is necessary as the code in those PRs (including CI job
+definition) might be modified by people who are not committers for the Apache Airflow Code Repository.
+
+The main purpose of those jobs is to check if PR builds cleanly, if the test run properly and if
+the PR is ready to review and merge. The runs are using cached images from the Private GitHub registry -
+CI, Production Images as well as base Python images that are also cached in the Private GitHub registry.
+Also for those builds we only execute Python tests if important files changed (so for example if it is
+doc-only change, no tests will be executed.
+
+Direct Push/Merge Run
+---------------------
+
+Those runs are results of direct pushes done by the committers or as result of merge of a Pull Request
+by the committers. Those runs execute in the context of the Apache Airflow Code Repository and have also
+write permission for GitHub resources (container registry, code repository).
+The main purpose for the run is to check if the code after merge still holds all the assertions - like
+whether it still builds, all tests are green.
+
+This is needed because some of the conflicting changes from multiple PRs might cause build and test failures
+after merge even if they do not fail in isolation. Also those runs are already reviewed and confirmed by the
+committers so they can be used to do some housekeeping:
+- pushing most recent image build in the PR to the Github Private Registry (for caching)
+- upgrading to latest constraints and pushing those constraints if all tests succeed
+- refresh latest Python base images in case new patch-level is released
+
+The housekeeping is important - Python base images are refreshed with varying frequency (once every few months
+usually but sometimes several times per week) with the latest security and bug fixes.
+Those patch level images releases can occasionally break Airflow builds (specifically Docker image builds
+based on those images) therefore in PRs we only use latest "good" python image that we store in the
+private GitHub cache. The direct push/master builds are not using registry cache to pull the python images
+- they are directly pulling the images from DockerHub, therefore they will try the latest images
+after they are released and in case they are fine, CI Docker image is build and tests are passing -
+those jobs will push the base images to the private GitHub Registry so that they be used by subsequent
+PR runs.
+
+Scheduled runs
+--------------
+
+Those runs are results of (nightly) triggered job - only for ``master`` branch. The
+main purpose of the job is to check if there was no impact of external dependency changes on the Apache
+Airflow code (for example transitive dependencies released that fail the build). It also checks if the
+Docker images can be build from the scratch (again - to see if some dependencies have not changed - for
+example downloaded package releases etc. Another reason for the nightly build is that the builds tags most
+recent master with ``nightly-master`` tag so that DockerHub build can pick up the moved tag and prepare a
+nightly public master build in the DockerHub registry. The ``v1-10-test`` branch images are build in
+DockerHub when pushing ``v1-10-stable`` manually.
 
 All runs consist of the same jobs, but the jobs behave slightly differently or they are skipped in different
 run categories. Here is a summary of the run categories with regards of the jobs they are running.
@@ -115,31 +132,35 @@ Those jobs often have matrix run strategy which runs several different variation
 | Job                       | Description                                                                                                    | Pull Request Run                   | Direct Push/Merge Run           | Scheduled Run                                                        |
 |                           |                                                                                                                |                                    |                                 |   (*) Builds all images from scratch                                 |
 +===========================+================================================================================================================+====================================+=================================+======================================================================+
-| Static checks 1           | Performs first set of static checks                                                                            | Yes                                | Yes                             | Yes *                                                                |
+| Cancel previous workflow  | Cancels the previously running workflow run if there is one running                                            | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Static checks 2           | Performs second set of static checks                                                                           | Yes                                | Yes                             | Yes *                                                                |
+| Static checks             | Performs static checks                                                                                         | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
 | Docs                      | Builds documentation                                                                                           | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
+| Prepare Backport packages | Prepares Backport Packages for 1.10.*                                                                          | Yes                                | Yes                             | Yes *                                                                |
++---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
+| Trigger tests             | Checks if tests should be triggered                                                                            | Yes                                | Yes                             | Yes *                                                                |
++---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
 | Build Prod Image          | Builds production image                                                                                        | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Prepare Backport packages | Prepares Backport Packages for 1.10.*                                                                          | Yes                                | Yes                             | Yes *                                                                |
+| Tests                     | Run all the combinations of Pytest tests for Python code                                                       | Yes (if tests-triggered)           | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Pyfiles                   | Counts how many python files changed in the  change.Used to determine if tests should be run                   | Yes                                | Yes (but it is not used)        | Yes (but it is not used)                                             |
+| Tests Kubernetes          | Run Kubernetes test                                                                                            | Yes (if tests-triggered)           | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Tests                     | Run all the combinations of Pytest tests for Python code                                                       | Yes (if pyfiles count >0)          | Yes                             | Yes*                                                                 |
+| Quarantined tests         | Those are tests that are flaky and we need to fix them                                                         | Yes (if tests-triggered)           | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Quarantined tests         | Those are tests that are flaky and we need to fix them                                                         | Yes (if pyfiles count >0)          | Yes                             | Yes *                                                                |
+| Test OpenAPI client gen   | Tests if OpenAPIClient continues to generate                                                                   | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Requirements              | Checks if requirement constraints in the code are up-to-date                                                   | Yes (fails if missing requirement) | Yes (fails missing requirement) | Yes (Eager dependency upgrade - does not fail changed requirements)  |
+| Helm tests                | Runs tests for the Helm chart                                                                                  | Yes                                | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Pull python from cache    | Pulls Python base images from Github Private Image registry to keep the last good python image used in PRs     | Yes                                | No                              | -                                                                    |
+| Constraints               | Upgrade constraints to latest eagerly pushed ones (only if tests successful)                                   | -                                  | Yes                             | Yes *                                                                |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Push python to cache      | Pushes Python base images to Github Private Image registry - checks if latest image is fine and pushes if so   | No                                 | Yes                             | -                                                                    |
+| Constraints push          | Pushes updated constraints (only if tests successful)                                                          | -                                  | Yes                             | -                                                                    |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Push Prod image           | Pushes production images to GitHub Private Image Registry to cache the build images for following runs         | -                                  | Yes                             | -                                                                    |
+| Push Prod images          | Pushes production images to GitHub Private Image Registry to cache the build images for following runs         | -                                  | Yes                             | -                                                                    |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
-| Push CI image             | Pushes CI images to GitHub Private Image Registry to cache the build images for following runs                 | -                                  | Yes                             | -                                                                    |
+| Push CI images            | Pushes CI images to GitHub Private Image Registry to cache the build images for following runs                 | -                                  | Yes                             | -                                                                    |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
 | Tag Repo nightly          | Tags the repository with nightly tag. It is a lightweight tag that moves nightly                               | -                                  | -                               | Yes. Triggers DockerHub build for public registry                    |
 +---------------------------+----------------------------------------------------------------------------------------------------------------+------------------------------------+---------------------------------+----------------------------------------------------------------------+
diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index e77a526..bbe5d8d 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -333,7 +333,7 @@ Airflow dependencies
 Airflow is not a standard python project. Most of the python projects fall into one of two types -
 application or library. As described in
 [StackOverflow Question](https://stackoverflow.com/questions/28509481/should-i-pin-my-python-dependencies-versions)
-decision whether to pin (freeze) requirements for a python project depdends on the type. For
+decision whether to pin (freeze) dependency versions for a python project depends on the type. For
 applications, dependencies should be pinned, but for libraries, they should be open.
 
 For application, pinning the dependencies makes it more stable to install in the future - because new
@@ -343,76 +343,61 @@ be open to allow several different libraries with the same requirements to be in
 The problem is that Apache Airflow is a bit of both - application to install and library to be used when
 you are developing your own operators and DAGs.
 
-This - seemingly unsolvable - puzzle is solved by having pinned requirement files. Those are available
-as of airflow 1.10.10.
+This - seemingly unsolvable - puzzle is solved by having pinned constraint files. Those are available
+as of airflow 1.10.10 and were further improved in 1.10.12 (moved to separate orphan branches).
 
-Pinned requirement files
-------------------------
+Pinned constraint files
+-----------------------
 
 By default when you install ``apache-airflow`` package - the dependencies are as open as possible while
-still allowing the apache-airflow package to install. This means that 'apache-airflow' package might fail to
+still allowing the apache-airflow package to install. This means that the ``apache-airflow`` package might fail to
 install in case a direct or transitive dependency is released that breaks the installation. In such case
 when installing ``apache-airflow``, you might need to provide additional constraints (for
 example ``pip install apache-airflow==1.10.2 Werkzeug<1.0.0``)
 
-However we now have ``requirements-python<PYTHON_MAJOR_MINOR_VERSION>.txt`` file generated
-automatically and committed in the requirements folder based on the set of all latest working and tested
-requirement versions. Those ``requirement-python<PYTHON_MAJOR_MINOR_VERSION>.txt`` files can be used as
-constraints file when installing Apache Airflow - either from the sources
+However, we now have ``constraints-<PYTHON_MAJOR_MINOR_VERSION>.txt`` files generated
+automatically and committed to the orphan ``constraints-master`` and ``constraints-1-10`` branches based on
+the set of all latest working and tested dependency versions. Those
+``constraints-<PYTHON_MAJOR_MINOR_VERSION>.txt`` files can be used as
+constraint files when installing Apache Airflow - either from the sources:
 
 .. code-block:: bash
 
-  pip install -e . --constraint requirements/requirements-python3.6.txt
+  pip install -e . \
+    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1-10/constraints-3.6.txt"
 
 
-or from the pypi package
+or from the pypi package:
 
 .. code-block:: bash
 
-  pip install apache-airflow --constraint requirements/requirements-python3.6.txt
+  pip install apache-airflow \
+    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1-10/constraints-3.6.txt"
 
 
 This works also with extras - for example:
 
 .. code-block:: bash
 
-  pip install .[gcp] --constraint requirements/requirements-python3.6.txt
+  pip install .[ssh] \
+    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-master/constraints-3.6.txt"
 
 
-It is also possible to use constraints directly from github using tag/version name:
+As of apache-airflow 1.10.12 it is also possible to use constraints directly from GitHub using a specific
+tag/hash name. We tag the constraints that work for a particular release with a ``constraints-<version>`` tag.
+So, for example, the fixed, valid constraints for 1.10.12 can be used via the ``constraints-1.10.12`` tag:
 
 .. code-block:: bash
 
-  pip install apache-airflow[gcp]==1.10.10 \
-      --constraint https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.6.txt
+  pip install apache-airflow[ssh]==1.10.12 \
+      --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.6.txt"
 
-There are different set of fixed requirements for different python major/minor versions and you should
-use the right requirements file for the right python version.
+There are different sets of fixed constraint files for different python major/minor versions and you should
+use the right file for the right python version.
 
-The ``requirements-python<PYTHON_MAJOR_MINOR_VERSION>.txt`` file MUST be regenerated every time after
-the ``setup.py`` is updated. This is checked automatically in the CI build. There are separate
-jobs for each python version that checks if the requirements should be updated.
-
-If they are not updated, you should regenerate the requirements locally using Breeze as described below.
-
-Generating requirement files
-----------------------------
-
-This should be done every time after you modify setup.py file. You can generate requirement files
-using `Breeze <BREEZE.rst>`_ . Simply use those commands:
-
-.. code-block:: bash
-
-  breeze generate-requirements --python 3.7
-
-.. code-block:: bash
-
-  breeze generate-requirements --python 3.6
-
-Note that when you generate requirements this way, you might update to latest version of requirements
-that were released since the last time so during tests you might get errors unrelated to your change.
-In this case the easiest way to fix it is to limit the culprit dependency to the previous version
-with ``<NNNN.NN>`` constraint added in setup.py.
+The ``constraints-<PYTHON_MAJOR_MINOR_VERSION>.txt`` files will be automatically regenerated by the CI
+cron job every time ``setup.py`` is updated, and pushed if the tests are successful. There are separate
+jobs for each python version.
 
 Backport providers packages
 ---------------------------
diff --git a/Dockerfile b/Dockerfile
index c06105d..98cf3dc 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -168,12 +168,15 @@ ARG AIRFLOW_EXTRAS
 ARG ADDITIONAL_AIRFLOW_EXTRAS=""
 ENV AIRFLOW_EXTRAS=${AIRFLOW_EXTRAS}${ADDITIONAL_AIRFLOW_EXTRAS:+,}${ADDITIONAL_AIRFLOW_EXTRAS}
 
+ARG AIRFLOW_CONSTRAINTS_REFERENCE="constraints-master"
+ARG AIRFLOW_CONSTRAINTS_URL="https://raw.githubusercontent.com/apache/airflow/${AIRFLOW_CONSTRAINTS_REFERENCE}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+ENV AIRFLOW_CONSTRAINTS_URL=${AIRFLOW_CONSTRAINTS_URL}
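+# Note: AIRFLOW_CONSTRAINTS_REFERENCE can be overridden at build time to point at a different
+# constraints branch or tag, for example: --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1-10"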
+
 # In case of Production build image segment we want to pre-install master version of airflow
 # dependencies from github so that we do not have to always reinstall it from the scratch.
 RUN pip install --user \
     "https://github.com/${AIRFLOW_REPO}/archive/${AIRFLOW_BRANCH}.tar.gz#egg=apache-airflow[${AIRFLOW_EXTRAS}]" \
-        --constraint "https://raw.githubusercontent.com/${AIRFLOW_REPO}/${AIRFLOW_BRANCH}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt" \
-    && pip uninstall --yes apache-airflow;
+        --constraint "${AIRFLOW_CONSTRAINTS_URL}" && pip uninstall --yes apache-airflow;
 
 ARG AIRFLOW_SOURCES_FROM="."
 ENV AIRFLOW_SOURCES_FROM=${AIRFLOW_SOURCES_FROM}
@@ -198,20 +201,14 @@ ENV AIRFLOW_INSTALL_SOURCES=${AIRFLOW_INSTALL_SOURCES}
 ARG AIRFLOW_INSTALL_VERSION=""
 ENV AIRFLOW_INSTALL_VERSION=${AIRFLOW_INSTALL_VERSION}
 
-ARG CONSTRAINT_REQUIREMENTS="requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
-ENV CONSTRAINT_REQUIREMENTS=${CONSTRAINT_REQUIREMENTS}
-
 WORKDIR /opt/airflow
 
-# hadolint ignore=DL3020
-ADD "${CONSTRAINT_REQUIREMENTS}" /requirements.txt
-
 ENV PATH=${PATH}:/root/.local/bin
 
 RUN pip install --user "${AIRFLOW_INSTALL_SOURCES}[${AIRFLOW_EXTRAS}]${AIRFLOW_INSTALL_VERSION}" \
-    --constraint /requirements.txt && \
+    --constraint "${AIRFLOW_CONSTRAINTS_URL}" && \
     if [ -n "${ADDITIONAL_PYTHON_DEPS}" ]; then pip install --user ${ADDITIONAL_PYTHON_DEPS} \
-    --constraint /requirements.txt; fi && \
+    --constraint "${AIRFLOW_CONSTRAINTS_URL}"; fi && \
     find /root/.local/ -name '*.pyc' -print0 | xargs -0 rm -r && \
     find /root/.local/ -type d -name '__pycache__' -print0 | xargs -0 rm -r
 
diff --git a/Dockerfile.ci b/Dockerfile.ci
index 2b2157a..5d4f240 100644
--- a/Dockerfile.ci
+++ b/Dockerfile.ci
@@ -29,9 +29,6 @@ ENV AIRFLOW_VERSION=$AIRFLOW_VERSION
 ARG PYTHON_MAJOR_MINOR_VERSION="3.6"
 ENV PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION}
 
-ARG UPGRADE_TO_LATEST_REQUIREMENTS="false"
-ENV UPGRADE_TO_LATEST_REQUIREMENTS=${UPGRADE_TO_LATEST_REQUIREMENTS}
-
 # Print versions
 RUN echo "Base image: ${PYTHON_BASE_IMAGE}"
 RUN echo "Airflow version: ${AIRFLOW_VERSION}"
@@ -214,6 +211,10 @@ ENV AIRFLOW_EXTRAS=${AIRFLOW_EXTRAS}${ADDITIONAL_AIRFLOW_EXTRAS:+,}${ADDITIONAL_
 
 RUN echo "Installing with extras: ${AIRFLOW_EXTRAS}."
 
+ARG AIRFLOW_CONSTRAINTS_REFERENCE="constraints-master"
+ARG AIRFLOW_CONSTRAINTS_URL="https://raw.githubusercontent.com/apache/airflow/${AIRFLOW_CONSTRAINTS_REFERENCE}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+ENV AIRFLOW_CONSTRAINTS_URL=${AIRFLOW_CONSTRAINTS_URL}
+
 # By changing the CI build epoch we can force reinstalling Airflow from the current master
 # It can also be overwritten manually by setting the AIRFLOW_CI_BUILD_EPOCH environment variable.
 ARG AIRFLOW_CI_BUILD_EPOCH="1"
@@ -225,8 +226,7 @@ ENV AIRFLOW_CI_BUILD_EPOCH=${AIRFLOW_CI_BUILD_EPOCH}
 # And is automatically reinstalled from the scratch every time patch release of python gets released
 RUN pip install \
     "https://github.com/${AIRFLOW_REPO}/archive/${AIRFLOW_BRANCH}.tar.gz#egg=apache-airflow[${AIRFLOW_EXTRAS}]" \
-        --constraint "https://raw.githubusercontent.com/${AIRFLOW_REPO}/${AIRFLOW_BRANCH}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt" \
-    && pip uninstall --yes apache-airflow;
+        --constraint "${AIRFLOW_CONSTRAINTS_URL}" && pip uninstall --yes apache-airflow;
 
 
 # Link dumb-init for backwards compatibility (so that older images also work)
@@ -252,20 +252,21 @@ COPY airflow/version.py ${AIRFLOW_SOURCES}/airflow/version.py
 COPY airflow/__init__.py ${AIRFLOW_SOURCES}/airflow/__init__.py
 COPY airflow/bin/airflow ${AIRFLOW_SOURCES}/airflow/bin/airflow
 
-COPY requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt \
-        ${AIRFLOW_SOURCES}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt
+ARG UPGRADE_TO_LATEST_CONSTRAINTS="false"
+ENV UPGRADE_TO_LATEST_CONSTRAINTS=${UPGRADE_TO_LATEST_CONSTRAINTS}
 
 # The goal of this line is to install the dependencies from the most current setup.py from sources
 # This will be usually incremental small set of packages in CI optimized build, so it will be very fast
 # In non-CI optimized build this will install all dependencies before installing sources.
-# Usually we will install versions constrained to the current requirements file
+# Usually we will install versions constrained to the current constraints file
 # But in cron job we will install latest versions matching setup.py to see if there is no breaking change
+# and push the constraints if everything is successful
 RUN \
-    if [[ "${UPGRADE_TO_LATEST_REQUIREMENTS}" == "true" ]]; then \
+    if [[ "${UPGRADE_TO_LATEST_CONSTRAINTS}" == "true" ]]; then \
         pip install -e ".[${AIRFLOW_EXTRAS}]" --upgrade --upgrade-strategy eager; \
     else \
         pip install -e ".[${AIRFLOW_EXTRAS}]" \
-            --constraint  ${AIRFLOW_SOURCES}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt ; \
+            --constraint "${AIRFLOW_CONSTRAINTS_URL}" ; \
     fi
 
 # Copy all the www/ files we need to compile assets. Done as two separate COPY
diff --git a/IMAGES.rst b/IMAGES.rst
index b1890e1..6cdc3b2 100644
--- a/IMAGES.rst
+++ b/IMAGES.rst
@@ -92,25 +92,23 @@ parameter to Breeze:
 
 .. code-block:: bash
 
-  ./breeze build-image --python 3.7 --extras=gcp --production-image --install-airflow-version=1.10.9
+  ./breeze build-image --python 3.7 --extras=gcp --production-image --install-airflow-version=1.10.12
 
 This will build the image using command similar to:
 
 .. code-block:: bash
 
-    pip install apache-airflow[sendgrid]==1.10.9 \
-       --constraint https://raw.githubusercontent.com/apache/airflow/v1-10-test/requirements/requirements-python3.7.txt
-
-The requirement files only appeared in version 1.10.10 of airflow so if you install
-an earlier version -  both constraint and requirements should point to 1.10.10 version.
+    pip install apache-airflow[sendgrid]==1.10.12 \
+      --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
 
 You can also build production images from specific Git version via providing ``--install-airflow-reference``
-parameter to Breeze:
+parameter to Breeze (this time constraints are taken from the ``constraints-master`` branch which is the
+HEAD of development for constraints):
 
 .. code-block:: bash
 
-    pip install https://github.com/apache/airflow/archive/<tag>.tar.gz#egg=apache-airflow \
-       --constraint https://raw.githubusercontent.com/apache/airflow/<tag>/requirements/requirements-python3.7.txt
+    pip install "https://github.com/apache/airflow/archive/<tag>.tar.gz#egg=apache-airflow" \
+      --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-master/constraints-3.7.txt"
 
 Using cache during builds
 =========================
@@ -381,23 +379,19 @@ The following build arguments (``--build-arg`` in docker build command) can be u
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | ``AIRFLOW_VERSION``                      | ``2.0.0.dev0``                           | version of Airflow                       |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_ORG``                          | ``apache``                               | Github organisation from which Airflow   |
-|                                          |                                          | is installed (when installed from repo)  |
-+------------------------------------------+------------------------------------------+------------------------------------------+
 | ``AIRFLOW_REPO``                         | ``apache/airflow``                       | the repository from which PIP            |
 |                                          |                                          | dependencies are pre-installed           |
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | ``AIRFLOW_BRANCH``                       | ``master``                               | the branch from which PIP dependencies   |
-|                                          |                                          | are pre-installed                        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_GIT_REFERENCE``                | ``master``                               | reference (branch or tag) from Github    |
-|                                          |                                          | repository from which Airflow is         |
-|                                          |                                          | installed (when installed from repo)     |
+|                                          |                                          | are pre-installed initially              |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``REQUIREMENTS_GIT_REFERENCE``           | ``master``                               | reference (branch or tag) from Github    |
-|                                          |                                          | repository from which requirements are   |
-|                                          |                                          | downloaded for constraints (when         |
-|                                          |                                          | installed from repo).                    |
+| ``AIRFLOW_CONSTRAINTS_REFERENCE``        | ``constraints-master``                   | reference (branch or tag) from Github    |
+|                                          |                                          | repository from which constraints are    |
+|                                          |                                          | used. By default it is set to            |
+|                                          |                                          | ``constraints-master`` but can be        |
+|                                          |                                          | ``constraints-1-10`` for 1.10.* versions |
+|                                          |                                          | or it could point to specific version    |
+|                                          |                                          | for example ``constraints-1.10.12``      |
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | ``AIRFLOW_EXTRAS``                       | (see Dockerfile)                         | Default extras with which airflow is     |
 |                                          |                                          | installed                                |
@@ -462,10 +456,14 @@ production image. There are three types of build:
 |                                   | set Airflow version for example   |
 |                                   | "==1.10.10"                       |
 +-----------------------------------+-----------------------------------+
-| ``CONSTRAINT_REQUIREMENTS``       | Should point to requirements file |
-|                                   | in case of installation from      |
-|                                   | the package or from GitHub URL.   |
-|                                   | See examples below                |
+| ``AIRFLOW_CONSTRAINTS_REFERENCE`` | reference (branch or tag) from    |
+|                                   | Github where constraints file     |
+|                                   | is taken from. By default it is   |
+|                                   | ``constraints-master`` but can be |
+|                                   | ``constraints-1-10`` for 1.10.*   |
+|                                   | constraint or if you want to      |
+|                                   | point to a specific version       |
+|                                   | ``constraints-1.10.12``           |
 +-----------------------------------+-----------------------------------+
 | ``AIRFLOW_WWW``                   | In case of Airflow 2.0 it should  |
 |                                   | be "www", in case of Airflow 1.10 |
@@ -495,24 +493,22 @@ of 2.0 currently):
 
   docker build .
 
-This builds the production image in version 3.7 with default extras from 1.10.9 tag and
-requirements taken from v1-10-test branch in Github.
-Note that versions 1.10.9 and below have no requirements so requirements should be taken from head of
-the 1.10.10 tag.
+This builds the production image in version 3.7 with default extras from the 1.10.12 tag and
+constraints taken from the constraints-1-10 branch in GitHub.
 
 .. code-block:: bash
 
   docker build . \
     --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
     --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALL_SOURCES="https://github.com/apache/airflow/archive/1.10.10.tar.gz#egg=apache-airflow" \
-    --build-arg CONSTRAINT_REQUIREMENTS="https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt" \
+    --build-arg AIRFLOW_INSTALL_SOURCES="https://github.com/apache/airflow/archive/1.10.12.tar.gz#egg=apache-airflow" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1-10" \
     --build-arg AIRFLOW_BRANCH="v1-10-test" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty"
 
-This builds the production image in version 3.7 with default extras from 1.10.10 Pypi package and
-requirements taken from 1.10.10 tag in Github and pre-installed pip dependencies from the top
+This builds the production image in version 3.7 with default extras from 1.10.12 Pypi package and
+constraints taken from 1.10.12 tag in Github and pre-installed pip dependencies from the top
 of v1-10-test branch.
 
 .. code-block:: bash
@@ -521,15 +517,14 @@ of v1-10-test branch.
     --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
     --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
     --build-arg AIRFLOW_INSTALL_SOURCES="apache-airflow" \
-    --build-arg AIRFLOW_INSTALL_VERSION="==1.10.10" \
+    --build-arg AIRFLOW_INSTALL_VERSION="==1.10.12" \
     --build-arg AIRFLOW_BRANCH="v1-10-test" \
-    --build-arg CONSTRAINT_REQUIREMENTS="https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1.10.12" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty"
 
 This builds the production image in version 3.7 with additional airflow extras from 1.10.10 Pypi package and
-additional python dependencies and pre-installed pip dependencies from the top
-of v1-10-test branch.
+additional python dependencies and pre-installed pip dependencies from 1.10.10 tagged constraints.
 
 .. code-block:: bash
 
@@ -539,7 +534,7 @@ of v1-10-test branch.
     --build-arg AIRFLOW_INSTALL_SOURCES="apache-airflow" \
     --build-arg AIRFLOW_INSTALL_VERSION="==1.10.10" \
     --build-arg AIRFLOW_BRANCH="v1-10-test" \
-    --build-arg CONSTRAINT_REQUIREMENTS="https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1.10.10" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty" \
     --build-arg ADDITIONAL_AIRFLOW_EXTRAS="mssql,hdfs"
@@ -554,8 +549,8 @@ additional apt dev and runtime dependencies.
     --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
     --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
     --build-arg AIRFLOW_INSTALL_SOURCES="apache-airflow" \
-    --build-arg AIRFLOW_INSTALL_VERSION="==1.10.10" \
-    --build-arg CONSTRAINT_REQUIREMENTS="https://raw.githubusercontent.com/apache/airflow/1.10.11/requirements/requirements-python3.7.txt" \
+    --build-arg AIRFLOW_INSTALL_VERSION="==1.10.12" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1-10" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty" \
     --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc"
diff --git a/INSTALL b/INSTALL
index 40fed43..7e633cb 100644
--- a/INSTALL
+++ b/INSTALL
@@ -35,11 +35,12 @@ pip install .
 python setup.py install
 
 # You can also install recommended version of the dependencies by using
-# requirements-python<PYTHON_MAJOR_MINOR_VERSION>.txt as constraint file. This is needed in case
+# constraints-<PYTHON_MAJOR_MINOR_VERSION>.txt files as constraint files. This is needed in case
 # you have problems with installing the current requirements from PyPI.
-# There are different requirements for different python versions. For example"
+# There are different constraint files for different python versions. For example:
 
-pip install . --constraint requirements/requirements-python3.7.txt
+pip install . \
+  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-master/constraints-3.6.txt"
 
 # You can also install Airflow with extras specified. The list of available extras:
 # START EXTRAS HERE
diff --git a/LOCAL_VIRTUALENV.rst b/LOCAL_VIRTUALENV.rst
index b744fc3..8a20c02 100644
--- a/LOCAL_VIRTUALENV.rst
+++ b/LOCAL_VIRTUALENV.rst
@@ -36,11 +36,11 @@ These are examples of the development options available with the local virtualen
 
 * local debugging;
 * Airflow source view;
-* autocompletion;
+* auto-completion;
 * documentation support;
 * unit tests.
 
-This document describes minimum requirements and insructions for using a standalone version of the local virtualenv.
+This document describes minimum requirements and instructions for using a standalone version of the local virtualenv.
 
 Prerequisites
 =============
@@ -118,6 +118,15 @@ To create and initialize the local virtualenv:
 
     pip install -U -e ".[devel,<OTHER EXTRAS>]" # for example: pip install -U -e ".[devel,gcp,postgres]"
 
+In case you have problems with installing airflow because some of the requirements are not installable, you can
+try to install it with the set of working constraints (note that there are different constraint files
+for different python versions):
+
+.. code-block:: bash
+
+    pip install -U -e ".[devel,<OTHER EXTRAS>]" \
+        --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-master/constraints-3.6.txt"
+
 Note: when you first initialize database (the next step), you may encounter some problems.
 This is because airflow by default will try to load in example dags where some of them requires dependencies ``gcp`` and ``postgres``.
 You can solve the problem by:
diff --git a/README.md b/README.md
index 940e972..c7f3ad0 100644
--- a/README.md
+++ b/README.md
@@ -98,23 +98,24 @@ our dependencies as open as possible (in `setup.py`) so users can install differ
 if needed. This means that from time to time plain `pip install apache-airflow` will not work or will
 produce unusable Airflow installation.
 
-In order to have repeatable installation, however, starting from **Airflow 1.10.10** we also keep a set of
-"known-to-be-working" requirement files in the `requirements` folder. Those "known-to-be-working"
-requirements are per major/minor python version (3.6/3.7/3.8). You can use them as constraint files
-when installing Airflow from PyPI. Note that you have to specify correct Airflow version and python versions
-in the URL.
+In order to have repeatable installation, however, starting from **Airflow 1.10.10** (and updated in
+**Airflow 1.10.12**) we also keep a set of "known-to-be-working" constraint files in the
+orphan `constraints-master` and `constraints-1-10` branches. We keep those "known-to-be-working"
+constraint files separately per major/minor python version.
+You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
+correct Airflow tag/version/branch and python versions in the URL.
 
 1. Installing just airflow:
 
 ```bash
-pip install apache-airflow==1.10.11 \
- --constraint https://raw.githubusercontent.com/apache/airflow/1.10.11/requirements/requirements-python3.7.txt
+pip install apache-airflow==1.10.12 \
+ --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
 ```
 
 2. Installing with extras (for example postgres,gcp)
 ```bash
 pip install apache-airflow[postgres,gcp]==1.10.12 \
- --constraint https://raw.githubusercontent.com/apache/airflow/1.10.11/requirements/requirements-python3.7.txt
+ --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
 ```
 
 ## Building customized production images
diff --git a/breeze b/breeze
index 571d383..27eaee0 100755
--- a/breeze
+++ b/breeze
@@ -186,7 +186,8 @@ function initialize_virtualenv() {
         echo
         pushd "${AIRFLOW_SOURCES}"
         set +e
-        pip install -e ".[devel]" --constraint "requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+        pip install -e ".[devel]" \
+            --constraint "https://raw.githubusercontent.com/apache/airflow/${DEFAULT_CONSTRAINTS_BRANCH}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
         RES=$?
         set -e
         popd
@@ -825,9 +826,12 @@ function parse_arguments() {
           fi
           COMMAND_TO_RUN="run_docker_compose"
           ;;
-        generate-requirements)
+        generate-constraints)
           LAST_SUBCOMMAND="${1}"
-          COMMAND_TO_RUN="perform_generate_requirements"
+          COMMAND_TO_RUN="perform_generate_constraints"
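+          # generate-constraints needs a fresh CI image and an eager upgrade of dependencies,
+          # so force automatic answers, image rebuild and the latest-constraints mode here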
+          export FORCE_ANSWER_TO_QUESTIONS="yes"
+          export FORCE_BUILD_IMAGES="true"
+          export UPGRADE_TO_LATEST_CONSTRAINTS="true"
           shift ;;
         push-image)
           LAST_SUBCOMMAND="${1}"
@@ -1010,7 +1014,7 @@ function prepare_usage() {
     export USAGE_CLEANUP_IMAGE="Cleans up the container image created"
     export USAGE_DOCKER_COMPOSE="Executes specified docker-compose command"
     export USAGE_FLAGS="Shows all breeze's flags"
-    export USAGE_GENERATE_REQUIREMENTS="Generates pinned requirements for pip dependencies"
+    export USAGE_GENERATE_CONSTRAINTS="Generates pinned constraint files"
     export USAGE_INITIALIZE_LOCAL_VIRTUALENV="Initializes local virtualenv"
     export USAGE_PUSH_IMAGE="Pushes images to registry"
     export USAGE_KIND_CLUSTER="Manages KinD cluster on the host"
@@ -1128,27 +1132,30 @@ $(flag_verbosity)
     export DETAILED_USAGE_FLAGS="
       Explains in detail all the flags that can be used with breeze.
 "
-    DETAILED_USAGE_GENERATE_REQUIREMENTS="
-${CMDNAME} generate-requirements [FLAGS]
+    # shellcheck disable=SC2089
+    DETAILED_USAGE_GENERATE_CONSTRAINTS="
+${CMDNAME} generate-constraints [FLAGS]
 
-      Generates pinned requirements from setup.py. Those requirements are generated in requirements
-      directory - separately for different python version. Those requirements are used to run
-      CI builds as well as run repeatable production image builds. You can use those requirements
-      to predictably install released Airflow versions. You should run it always after you update
-      setup.py.
+      Generates pinned constraint files from setup.py. Those files are generated in the files folder
+      - separate files for different python versions. Those constraint files, when pushed to the orphan
+      constraints-master and constraints-1-10 branches, are used to generate repeatable
+      CI builds as well as repeatable production image builds. You can use those constraints
+      to predictably install released Airflow versions. This command is mainly used to test the constraint
+      generation - constraints are pushed to the orphan branches automatically by a successful scheduled
+      CRON job in CI.
 
 Flags:
 $(flag_airflow_variants)
 $(flag_verbosity)
 "
     # shellcheck disable=SC2090
-    export DETAILED_USAGE_GENERATE_REQUIREMENTS
+    export DETAILED_USAGE_GENERATE_CONSTRAINTS
     DETAILED_USAGE_INITIALIZE_LOCAL_VIRTUALENV="
 ${CMDNAME} initialize-local-virtualenv [FLAGS]
 
       Initializes locally created virtualenv installing all dependencies of Airflow
-      taking into account the frozen requirements from requirements folder.
-      This local virtualenv can be used to aid autocompletion and IDE support as
+      taking into account the constraints for the version specified.
+      This local virtualenv can be used to aid auto-completion and IDE support as
       well as run unit tests directly from the IDE. You need to have virtualenv
       activated before running this command.
 
@@ -1880,7 +1887,7 @@ function run_build_command {
                 rebuild_ci_image_if_needed
             fi
             ;;
-        build_docs|perform_static_checks|perform_generate_requirements)
+        build_docs|perform_static_checks|perform_generate_constraints)
             prepare_ci_build
             rebuild_ci_image_if_needed
             ;;
@@ -1996,8 +2003,8 @@ function run_breeze_command {
         cleanup_image)
             remove_images
             ;;
-        perform_generate_requirements)
-            run_generate_requirements
+        perform_generate_constraints)
+            run_generate_constraints
             ;;
         perform_push_image)
             if [[ ${PRODUCTION_IMAGE} == "true" ]]; then
diff --git a/breeze-complete b/breeze-complete
index 73368b8..574b9c7 100644
--- a/breeze-complete
+++ b/breeze-complete
@@ -118,7 +118,7 @@ build-docs
 build-image
 cleanup-image
 exec
-generate-requirements
+generate-constraints
 push-image
 initialize-local-virtualenv
 setup-autocomplete
diff --git a/common/_default_branch.sh b/common/_default_branch.sh
index e4c00ec..b47d672 100644
--- a/common/_default_branch.sh
+++ b/common/_default_branch.sh
@@ -17,3 +17,4 @@
 # under the License.
 
 export DEFAULT_BRANCH="v1-10-test"
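+# Orphan branch from which the "known-to-be-working" constraint files for the 1.10 line are taken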
+export DEFAULT_CONSTRAINTS_BRANCH="constraints-1-10"
diff --git a/docs/installation.rst b/docs/installation.rst
index d5652cb..ed82157 100644
--- a/docs/installation.rst
+++ b/docs/installation.rst
@@ -30,33 +30,40 @@ our dependencies as open as possible (in ``setup.py``) so users can install diff
 if needed. This means that from time to time plain ``pip install apache-airflow`` will not work or will
 produce unusable Airflow installation.
 
-In order to have repeatable installation, however, starting from **Airflow 1.10.10** we also keep a set of
-"known-to-be-working" requirement files in the ``requirements`` folder. Those "known-to-be-working"
-requirements are per major/minor python version (3.6/3.7). You can use them as constraint
+In order to have repeatable installation, however, starting from **Airflow 1.10.10** and updated in
+**Airflow 1.10.12**, we also keep a set of "known-to-be-working" constraint files in the
+``constraints-master`` and ``constraints-1-10`` orphan branches.
+Those "known-to-be-working" constraints are per major/minor python version. You can use them as constraint
 files when installing Airflow from PyPI. Note that you have to specify correct Airflow version
 and python versions in the URL.
 
+
+  **Prerequisites**
+
+  On Debian based Linux OS:
+
+  .. code-block:: bash
+
+      sudo apt-get update
+      sudo apt-get install build-essential
+
+
 1. Installing just airflow
 
 .. code-block:: bash
 
     pip install \
-     apache-airflow==1.10.10 \
-     --constraint \
-            https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt
-
+     apache-airflow==1.10.12 \
+     --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
 
-You need certain system level requirements in order to install Airflow. Those are requirements that are known
-to be needed for Linux system (Tested on Ubuntu Buster LTS) :
 
 2. Installing with extras (for example postgres, gcp)
 
 .. code-block:: bash
 
     pip install \
-     apache-airflow[postgres,gcp]==1.10.10 \
-     --constraint \
-            https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python3.7.txt
+     apache-airflow[postgres,gcp]==1.10.12 \
+     --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
 
 
 You need certain system level requirements in order to install Airflow. Those are requirements that are known
diff --git a/requirements/REMOVE.md b/requirements/REMOVE.md
new file mode 100644
index 0000000..e5163fb
--- /dev/null
+++ b/requirements/REMOVE.md
@@ -0,0 +1,22 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied.  See the License for the
+ specific language governing permissions and limitations
+ under the License.
+ -->
+
+This directory should be removed as soon as we release Airflow 1.10.12
+and sufficient time passes for everyone to switch to the new way of retrieving
+constraints.
diff --git a/scripts/ci/requirements/ci_generate_requirements.sh b/scripts/ci/constraints/ci_generate_constraints.sh
similarity index 97%
rename from scripts/ci/requirements/ci_generate_requirements.sh
rename to scripts/ci/constraints/ci_generate_constraints.sh
index 5cc4a0e..1b7ee4f 100755
--- a/scripts/ci/requirements/ci_generate_requirements.sh
+++ b/scripts/ci/constraints/ci_generate_constraints.sh
@@ -26,4 +26,4 @@ prepare_ci_build
 
 rebuild_ci_image_if_needed
 
-run_generate_requirements
+run_generate_constraints
diff --git a/scripts/ci/docker-compose/local.yml b/scripts/ci/docker-compose/local.yml
index 822d49d..3e1abab 100644
--- a/scripts/ci/docker-compose/local.yml
+++ b/scripts/ci/docker-compose/local.yml
@@ -49,7 +49,6 @@ services:
       - ../../../hooks:/opt/airflow/hooks:cached
       - ../../../logs:/root/airflow/logs:cached
       - ../../../pytest.ini:/opt/airflow/pytest.ini:cached
-      - ../../../requirements:/opt/airflow/requirements:cached
       - ../../../scripts:/opt/airflow/scripts:cached
       - ../../../scripts/ci/in_container/entrypoint_ci.sh:/entrypoint:cached
       - ../../../setup.cfg:/opt/airflow/setup.cfg:cached
diff --git a/scripts/ci/in_container/run_generate_constraints.sh b/scripts/ci/in_container/run_generate_constraints.sh
new file mode 100755
index 0000000..9b18c79
--- /dev/null
+++ b/scripts/ci/in_container/run_generate_constraints.sh
@@ -0,0 +1,50 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# shellcheck source=scripts/ci/in_container/_in_container_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/_in_container_script_init.sh"
+
+# adding trap to exiting trap
+HANDLERS="$( trap -p EXIT | cut -f2 -d \' )"
+# shellcheck disable=SC2064
+trap "${HANDLERS}${HANDLERS:+;}in_container_fix_ownership" EXIT
+
+CONSTRAINTS_DIR="/files/constraints-${PYTHON_MAJOR_MINOR_VERSION}"
+
+LATEST_CONSTRAINT_FILE="${CONSTRAINTS_DIR}/original-constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+CURRENT_CONSTRAINT_FILE="${CONSTRAINTS_DIR}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+
+mkdir -pv "${CONSTRAINTS_DIR}"
+
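+# Download the constraints currently published for this Python version so we can diff against them later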
+curl "${AIRFLOW_CONSTRAINTS_URL}" --output "${LATEST_CONSTRAINT_FILE}"
+
+echo
+echo "Freezing constraints to ${CURRENT_CONSTRAINT_FILE}"
+echo
+
+pip freeze | sort | \
+    grep -v "apache_airflow" | \
+    grep -v "/opt/airflow" >"${CURRENT_CONSTRAINT_FILE}"
+
+echo
+echo "Constraints generated in ${CURRENT_CONSTRAINT_FILE}"
+echo
+
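+# Show (for information only) how the freshly generated constraints differ from the published ones;
+# the diff exit status is intentionally ignored (hence "set +e" and the explicit "exit 0")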
+set +e
+diff --color=always "${LATEST_CONSTRAINT_FILE}" "${CURRENT_CONSTRAINT_FILE}"
+
+exit 0
diff --git a/scripts/ci/in_container/run_generate_requirements.sh b/scripts/ci/in_container/run_generate_requirements.sh
deleted file mode 100755
index 5022e13..0000000
--- a/scripts/ci/in_container/run_generate_requirements.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# shellcheck source=scripts/ci/in_container/_in_container_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_in_container_script_init.sh"
-
-# adding trap to exiting trap
-HANDLERS="$( trap -p EXIT | cut -f2 -d \' )"
-# shellcheck disable=SC2064
-trap "${HANDLERS}${HANDLERS:+;}in_container_fix_ownership" EXIT
-
-STORED_SETUP_PY_HASH_FILE="${AIRFLOW_SOURCES}/requirements/setup-${PYTHON_MAJOR_MINOR_VERSION}.md5"
-
-CURRENT_SETUP_PY_HASH=$(md5sum "${AIRFLOW_SOURCES}/setup.py")
-STORED_SETUP_PY_HASH=$(cat "${STORED_SETUP_PY_HASH_FILE}" 2>/dev/null || true)
-
-if [[ ${STORED_SETUP_PY_HASH} != "${CURRENT_SETUP_PY_HASH}" && ${CHECK_REQUIREMENTS_ONLY:=} == "true" ]]; then
-    echo
-    echo "ERROR! Setup.py changed since last time requirements were generated"
-    echo
-    echo "     When you update setup.py, you have to run"
-    echo
-    echo "           breeze generate-requirements --python ${PYTHON_MAJOR_MINOR_VERSION}"
-    echo
-    echo
-    exit 1
-fi
-
-# Upgrading requirements will happen only in CRON job to see that we have some
-# new requirements released
-if [[ ${UPGRADE_WHILE_GENERATING_REQUIREMENTS} == "true" ]]; then
-    echo
-    echo "Upgrading requirements to latest ones"
-    echo
-    pip install -e ".[${AIRFLOW_EXTRAS}]" --upgrade --upgrade-strategy eager
-fi
-
-OLD_REQUIREMENTS_FILE="/tmp/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
-GENERATED_REQUIREMENTS_FILE="${AIRFLOW_SOURCES}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
-
-echo
-echo "Copying requirements ${GENERATED_REQUIREMENTS_FILE} -> ${OLD_REQUIREMENTS_FILE}"
-echo
-cp "${GENERATED_REQUIREMENTS_FILE}" "${OLD_REQUIREMENTS_FILE}"
-
-echo
-echo "Freezing requirements to ${GENERATED_REQUIREMENTS_FILE}"
-echo
-
-pip freeze | sort | \
-    grep -v "apache_airflow" | \
-    grep -v "/opt/airflow" >"${GENERATED_REQUIREMENTS_FILE}"
-
-echo
-echo "Requirements generated in ${GENERATED_REQUIREMENTS_FILE}"
-echo
-
-echo
-echo "Storing setup.py hash in ${STORED_SETUP_PY_HASH_FILE}"
-echo
-echo "${CURRENT_SETUP_PY_HASH}" > "${STORED_SETUP_PY_HASH_FILE}"
-
-set +e
-diff --color=always "${OLD_REQUIREMENTS_FILE}" "${GENERATED_REQUIREMENTS_FILE}"
-
-exit 0
diff --git a/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh b/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
index 3d8194a..bb5a31e 100755
--- a/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
+++ b/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
@@ -87,10 +87,11 @@ fi
 . "${VIRTUALENV_PATH}/bin/activate"
 
 pip install pytest freezegun pytest-cov \
-    --constraint "requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+  --constraint "https://raw.githubusercontent.com/apache/airflow/${DEFAULT_CONSTRAINTS_BRANCH}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+
 
 pip install -e ".[kubernetes]" \
-    --constraint "requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+  --constraint "https://raw.githubusercontent.com/apache/airflow/${DEFAULT_CONSTRAINTS_BRANCH}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
 
 if [[ ${INTERACTIVE} == "true" ]]; then
     echo
diff --git a/scripts/ci/libraries/_build_images.sh b/scripts/ci/libraries/_build_images.sh
index 01eac17..9020173 100644
--- a/scripts/ci/libraries/_build_images.sh
+++ b/scripts/ci/libraries/_build_images.sh
@@ -29,26 +29,27 @@ function add_build_args_for_remote_install() {
     )
     if [[ ${AIRFLOW_VERSION} =~ [^0-9]*1[^0-9]*10[^0-9]([0-9]*) ]]; then
         # All types of references/versions match this regexp for 1.10 series
-        # for example v1_10_test, 1.10.10, 1.10.9 etc. ${BASH_REMATCH[1]} is the () group matches last
+        # for example v1_10_test, 1.10.10, 1.10.9 etc. ${BASH_REMATCH[1]} matches last
         # minor digit of version and its length is 0 for v1_10_test, 1 for 1.10.9 and 2 for 1.10.10+
-        if [[ ${#BASH_REMATCH[1]} == "1" ]]; then
-            # This is only for 1.10.0 - 1.10.9
+        AIRFLOW_MINOR_VERSION_NUMBER=${BASH_REMATCH[1]}
+        if [[ ${#AIRFLOW_MINOR_VERSION_NUMBER} == "0" ]]; then
+            # For v1_10_* branches use constraints-1-10 branch
             EXTRA_DOCKER_PROD_BUILD_FLAGS+=(
-                "--build-arg" "CONSTRAINT_REQUIREMENTS=https://raw.githubusercontent.com/apache/airflow/1.10.10/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+                "--build-arg" "AIRFLOW_CONSTRAINTS_REFERENCE=constraints-1-10"
             )
         else
             EXTRA_DOCKER_PROD_BUILD_FLAGS+=(
-                # For 1.10.10+ and v1-10-test it's ok to use AIRFLOW_VERSION as reference
-                "--build-arg" "CONSTRAINT_REQUIREMENTS=https://raw.githubusercontent.com/apache/airflow/${AIRFLOW_VERSION}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+                # For a specific minor version of 1.10 use the version-specific constraints reference
+                "--build-arg" "AIRFLOW_CONSTRAINTS_REFERENCE=constraints-${AIRFLOW_VERSION}"
             )
         fi
         AIRFLOW_BRANCH_FOR_PYPI_PRELOADING="v1-10-test"
     else
-        # For all other (master, 2.0+) we just match ${AIRFLOW_VERSION}
+        # For all others (master, 2.0+) we just use the default constraints branch
         EXTRA_DOCKER_PROD_BUILD_FLAGS+=(
-            "--build-arg" "CONSTRAINT_REQUIREMENTS=https://raw.githubusercontent.com/apache/airflow/${AIRFLOW_VERSION}/requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt"
+            "--build-arg" "AIRFLOW_CONSTRAINTS_REFERENCE=${DEFAULT_CONSTRAINTS_BRANCH}"
         )
-        AIRFLOW_BRANCH_FOR_PYPI_PRELOADING="master"
+        AIRFLOW_BRANCH_FOR_PYPI_PRELOADING=${DEFAULT_BRANCH}
     fi
 }
 
@@ -485,7 +486,7 @@ function rebuild_ci_image_if_needed_and_confirmed() {
 }
 
 # Determines the strategy to be used for caching based on the type of CI job run.
-# In case of CRON jobs we run builds without cache and upgrade to latest requirements
+# In case of CRON jobs we run builds without cache and upgrade constraint files to the latest versions
 function determine_cache_strategy() {
     if [[ "${CI_EVENT_TYPE:=}" == "schedule" ]]; then
         echo
@@ -493,18 +494,14 @@ function determine_cache_strategy() {
         echo
         export DOCKER_CACHE="disabled"
         echo
-        echo "Requirements are upgraded to latest for scheduled CI build"
-        echo
-        export UPGRADE_TO_LATEST_REQUIREMENTS="true"
     else
         echo
         echo "Pull cache used for regular CI builds"
         echo
         export DOCKER_CACHE="pulled"
         echo
-        echo "Requirements are not upgraded to latest ones for regular CI builds"
+        echo "Constraints are not upgraded to latest ones for regular CI builds"
         echo
-        export UPGRADE_TO_LATEST_REQUIREMENTS="false"
     fi
 }
 
@@ -571,14 +568,15 @@ Docker building ${AIRFLOW_CI_IMAGE}.
     verbose_docker build \
         --build-arg PYTHON_BASE_IMAGE="${PYTHON_BASE_IMAGE}" \
         --build-arg PYTHON_MAJOR_MINOR_VERSION="${PYTHON_MAJOR_MINOR_VERSION}" \
-            --build-arg AIRFLOW_VERSION="${AIRFLOW_VERSION}" \
+        --build-arg AIRFLOW_VERSION="${AIRFLOW_VERSION}" \
         --build-arg AIRFLOW_BRANCH="${BRANCH_NAME}" \
         --build-arg AIRFLOW_EXTRAS="${AIRFLOW_EXTRAS}" \
         --build-arg ADDITIONAL_AIRFLOW_EXTRAS="${ADDITIONAL_AIRFLOW_EXTRAS}" \
         --build-arg ADDITIONAL_PYTHON_DEPS="${ADDITIONAL_PYTHON_DEPS}" \
         --build-arg ADDITIONAL_DEV_DEPS="${ADDITIONAL_DEV_DEPS}" \
         --build-arg ADDITIONAL_RUNTIME_DEPS="${ADDITIONAL_RUNTIME_DEPS}" \
-        --build-arg UPGRADE_TO_LATEST_REQUIREMENTS="${UPGRADE_TO_LATEST_REQUIREMENTS}" \
+        --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-1-10" \
+        --build-arg UPGRADE_TO_LATEST_CONSTRAINTS="${UPGRADE_TO_LATEST_CONSTRAINTS}" \
         "${DOCKER_CACHE_CI_DIRECTIVE[@]}" \
         -t "${AIRFLOW_CI_IMAGE}" \
         --target "main" \
@@ -727,6 +725,7 @@ function build_prod_image() {
         --build-arg ADDITIONAL_AIRFLOW_EXTRAS="${ADDITIONAL_AIRFLOW_EXTRAS}" \
         --build-arg ADDITIONAL_PYTHON_DEPS="${ADDITIONAL_PYTHON_DEPS}" \
         --build-arg ADDITIONAL_DEV_DEPS="${ADDITIONAL_DEV_DEPS}" \
+        --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="${DEFAULT_CONSTRAINTS_BRANCH}" \
         "${DOCKER_CACHE_PROD_BUILD_DIRECTIVE[@]}" \
         -t "${AIRFLOW_PROD_BUILD_IMAGE}" \
         --target "airflow-build-image" \
@@ -743,6 +742,7 @@ function build_prod_image() {
         --build-arg AIRFLOW_BRANCH="${AIRFLOW_BRANCH_FOR_PYPI_PRELOADING}" \
         --build-arg AIRFLOW_EXTRAS="${AIRFLOW_EXTRAS}" \
         --build-arg EMBEDDED_DAGS="${EMBEDDED_DAGS}" \
+        --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="${DEFAULT_CONSTRAINTS_BRANCH}" \
         "${DOCKER_CACHE_PROD_DIRECTIVE[@]}" \
         -t "${AIRFLOW_PROD_IMAGE}" \
         --target "main" \
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index e759b03..d3224e7 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -157,9 +157,9 @@ function initialize_common_environment {
         done
     fi
 
-    # By default we are not upgrading to latest requirements when building Docker CI image
+    # By default we are not upgrading to latest version of constraints when building Docker CI image
     # This will only be done in cron jobs
-    export UPGRADE_TO_LATEST_REQUIREMENTS=${UPGRADE_TO_LATEST_REQUIREMENTS:="false"}
+    export UPGRADE_TO_LATEST_CONSTRAINTS=${UPGRADE_TO_LATEST_CONSTRAINTS:="false"}
 
     # In case of MacOS we need to use gstat - gnu version of the stats
     export STAT_BIN=stat
@@ -174,23 +174,13 @@ function initialize_common_environment {
     # default version of python used to tag the "master" and "latest" images in DockerHub
     export DEFAULT_PYTHON_MAJOR_MINOR_VERSION=3.6
 
-    # In case we are not in CI - we assume we run locally. There are subtle changes if you run
-    # CI scripts locally - for example requirements are eagerly updated if you do local run
-    # in generate requirements
+    # In case we are not in CI - we assume we run locally.
     if [[ ${CI:="false"} == "true" ]]; then
         export LOCAL_RUN="false"
     else
         export LOCAL_RUN="true"
     fi
 
-    # eager upgrade while generating requirements should only happen in locally run
-    # pre-commits or in cron job
-    if [[ ${LOCAL_RUN} == "true" ]]; then
-        export UPGRADE_WHILE_GENERATING_REQUIREMENTS="true"
-    else
-        export UPGRADE_WHILE_GENERATING_REQUIREMENTS=${UPGRADE_WHILE_GENERATING_REQUIREMENTS:="false"}
-    fi
-
     # Default extras used for building CI image
     export DEFAULT_CI_EXTRAS="devel_ci"
 
diff --git a/scripts/ci/libraries/_local_mounts.sh b/scripts/ci/libraries/_local_mounts.sh
index 127ebb3..da16744 100644
--- a/scripts/ci/libraries/_local_mounts.sh
+++ b/scripts/ci/libraries/_local_mounts.sh
@@ -45,7 +45,6 @@ function generate_local_mounts_list {
         "$prefix"hooks:/opt/airflow/hooks:cached
         "$prefix"logs:/root/airflow/logs:cached
         "$prefix"pytest.ini:/opt/airflow/pytest.ini:cached
-        "$prefix"requirements:/opt/airflow/requirements:cached
         "$prefix"scripts:/opt/airflow/scripts:cached
         "$prefix"scripts/ci/in_container/entrypoint_ci.sh:/entrypoint:cached
         "$prefix"setup.cfg:/opt/airflow/setup.cfg:cached
diff --git a/scripts/ci/libraries/_runs.sh b/scripts/ci/libraries/_runs.sh
index 76b674d..ef18af8 100644
--- a/scripts/ci/libraries/_runs.sh
+++ b/scripts/ci/libraries/_runs.sh
@@ -34,8 +34,8 @@ function run_docs() {
             | tee -a "${OUTPUT_LOG}"
 }
 
-# Docker command to generate constraint requirement files.
-function run_generate_requirements() {
+# Docker command to generate constraint files.
+function run_generate_constraints() {
     docker run "${EXTRA_DOCKER_FLAGS[@]}" \
         --entrypoint "/usr/local/bin/dumb-init"  \
         --env PYTHONDONTWRITEBYTECODE \
@@ -46,11 +46,9 @@ function run_generate_requirements() {
         --env HOST_OS="$(uname -s)" \
         --env HOST_HOME="${HOME}" \
         --env HOST_AIRFLOW_SOURCES="${AIRFLOW_SOURCES}" \
-        --env UPGRADE_WHILE_GENERATING_REQUIREMENTS \
         --env PYTHON_MAJOR_MINOR_VERSION \
-        --env CHECK_REQUIREMENTS_ONLY \
         --rm \
         "${AIRFLOW_CI_IMAGE}" \
-        "--" "/opt/airflow/scripts/ci/in_container/run_generate_requirements.sh" \
+        "--" "/opt/airflow/scripts/ci/in_container/run_generate_constraints.sh" \
         | tee -a "${OUTPUT_LOG}"
 }
diff --git a/scripts/ci/pre_commit/pre_commit_generate_requirements.sh b/scripts/ci/pre_commit/pre_commit_generate_requirements.sh
deleted file mode 100755
index d0c2deb..0000000
--- a/scripts/ci/pre_commit/pre_commit_generate_requirements.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-export FORCE_ANSWER_TO_QUESTIONS=${FORCE_ANSWER_TO_QUESTIONS:="quit"}
-export REMEMBER_LAST_ANSWER="true"
-
-export PYTHON_MAJOR_MINOR_VERSION="${1}"
-
-# shellcheck source=scripts/ci/requirements/ci_generate_requirements.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/../generate_requirements/ci_generate_requirements.sh"
diff --git a/scripts/ci/static_checks/ci_run_static_checks.sh b/scripts/ci/static_checks/ci_run_static_checks.sh
index 6b7f124..e7cf6f2 100755
--- a/scripts/ci/static_checks/ci_run_static_checks.sh
+++ b/scripts/ci/static_checks/ci_run_static_checks.sh
@@ -33,6 +33,9 @@ prepare_ci_build
 
 rebuild_ci_image_if_needed
 
+python -m pip install pre-commit \
+  --constraint "https://raw.githubusercontent.com/apache/airflow/${DEFAULT_CONSTRAINTS_BRANCH}/constraints-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+
 if [[ $# == "0" ]]; then
     pre-commit run --all-files --show-diff-on-failure --color always
 else
diff --git a/scripts/ci/tools/ci_check_if_tests_should_be_run.sh b/scripts/ci/tools/ci_check_if_tests_should_be_run.sh
index 5a51048..0b8a7b1 100755
--- a/scripts/ci/tools/ci_check_if_tests_should_be_run.sh
+++ b/scripts/ci/tools/ci_check_if_tests_should_be_run.sh
@@ -26,7 +26,6 @@ CHANGED_FILES_PATTERNS=(
     "^scripts"
     "^chart"
     "^setup.py"
-    "^requirements"
     "^tests"
     "^kubernetes_tests"
 )


[airflow] 04/32: [AIRFLOW-5391] Do not re-run skipped tasks when they are cleared (#7276)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 179e93093b7b21fa60cf8d5984ec4fd0c6b52730
Author: yuqian90 <yu...@gmail.com>
AuthorDate: Fri Feb 21 19:35:55 2020 +0800

    [AIRFLOW-5391] Do not re-run skipped tasks when they are cleared (#7276)
    
    If a task is skipped by BranchPythonOperator, BaseBranchOperator or ShortCircuitOperator and the user then clears the skipped task later, it executes. This is probably not the right behaviour.
    
    This commit changes that so the task is skipped again. The new behaviour can be overridden by running the task again with the "Ignore Task Deps" override.
    
    (cherry picked from commit 1cdab56a6192f69962506b7ff632c986c84eb10d)
---
 UPDATING.md                                        |   7 ++
 airflow/models/baseoperator.py                     |   2 +
 airflow/models/skipmixin.py                        | 112 ++++++++++++-----
 airflow/ti_deps/dep_context.py                     |  27 ++++-
 airflow/ti_deps/deps/not_previously_skipped_dep.py |  88 ++++++++++++++
 requirements/requirements-python2.7.txt            |  67 ++++++-----
 requirements/requirements-python3.8.txt            |  92 +++++++-------
 requirements/setup-2.7.md5                         |   2 +-
 requirements/setup-3.8.md5                         |   2 +-
 tests/jobs/test_scheduler_job.py                   |  39 ++++++
 tests/operators/test_latest_only_operator.py       |  60 ++++++----
 tests/operators/test_python_operator.py            | 119 +++++++++++++++++-
 .../deps/test_not_previously_skipped_dep.py        | 133 +++++++++++++++++++++
 tests/ti_deps/deps/test_trigger_rule_dep.py        |  40 +++++++
 14 files changed, 655 insertions(+), 135 deletions(-)
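
For context on the "Ignore Task Deps" override mentioned above: NotPreviouslySkippedDep is declared with IS_TASK_DEP = True, so it is bypassed whenever task deps are ignored. A minimal sketch of the two evaluation modes, assuming an Airflow 1.10 environment (this is not the scheduler's exact call path):

    from airflow.ti_deps.dep_context import DepContext

    # Normal evaluation: the new dep participates and can re-skip a cleared task.
    normal_context = DepContext()

    # "Ignore Task Deps" evaluation: task-level deps, including
    # NotPreviouslySkippedDep (IS_TASK_DEP = True), are bypassed, so a
    # cleared-but-previously-skipped task can be forced to run.
    forced_context = DepContext(ignore_task_deps=True)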

diff --git a/UPDATING.md b/UPDATING.md
index 61734bb..f82ba10 100644
--- a/UPDATING.md
+++ b/UPDATING.md
@@ -25,6 +25,7 @@ assists users migrating to a new version.
 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
 **Table of contents**
 
+- [Airflow 1.10.12](#airflow-11012)
 - [Airflow 1.10.11](#airflow-11011)
 - [Airflow 1.10.10](#airflow-11010)
 - [Airflow 1.10.9](#airflow-1109)
@@ -59,6 +60,12 @@ More tips can be found in the guide:
 https://developers.google.com/style/inclusive-documentation
 
 -->
+## Airflow 1.10.12
+
+### Clearing tasks skipped by SkipMixin will skip them
+
+Previously, when tasks skipped by SkipMixin (such as BranchPythonOperator, BaseBranchOperator and ShortCircuitOperator) were cleared, they were executed again.
+Since 1.10.12, when such skipped tasks are cleared, they will be skipped again by the newly introduced NotPreviouslySkippedDep.
 
 ## Airflow 1.10.11
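
A minimal sketch of a DAG affected by this change (task and DAG names are illustrative; Airflow 1.10 import paths):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator
    from airflow.operators.python_operator import BranchPythonOperator

    # "branch" follows "follow", so "skip_me" ends up SKIPPED. From 1.10.12,
    # clearing "skip_me" re-skips it via NotPreviouslySkippedDep instead of
    # running it; the "Ignore Task Deps" override still forces it to run.
    with DAG("branch_clear_example",
             start_date=datetime(2020, 1, 1),
             schedule_interval=None) as dag:
        branch = BranchPythonOperator(task_id="branch",
                                      python_callable=lambda: "follow")
        follow = DummyOperator(task_id="follow")
        skip_me = DummyOperator(task_id="skip_me")
        branch >> [follow, skip_me]
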
 
diff --git a/airflow/models/baseoperator.py b/airflow/models/baseoperator.py
index 52037c5..266ad64 100644
--- a/airflow/models/baseoperator.py
+++ b/airflow/models/baseoperator.py
@@ -45,6 +45,7 @@ from airflow.models.pool import Pool
 from airflow.models.taskinstance import TaskInstance, clear_task_instances
 from airflow.models.xcom import XCOM_RETURN_KEY
 from airflow.ti_deps.deps.not_in_retry_period_dep import NotInRetryPeriodDep
+from airflow.ti_deps.deps.not_previously_skipped_dep import NotPreviouslySkippedDep
 from airflow.ti_deps.deps.prev_dagrun_dep import PrevDagrunDep
 from airflow.ti_deps.deps.trigger_rule_dep import TriggerRuleDep
 from airflow.utils import timezone
@@ -575,6 +576,7 @@ class BaseOperator(LoggingMixin):
             NotInRetryPeriodDep(),
             PrevDagrunDep(),
             TriggerRuleDep(),
+            NotPreviouslySkippedDep(),
         }
 
     @property
diff --git a/airflow/models/skipmixin.py b/airflow/models/skipmixin.py
index 57341d8..3b4531f 100644
--- a/airflow/models/skipmixin.py
+++ b/airflow/models/skipmixin.py
@@ -19,28 +19,28 @@
 
 from airflow.models.taskinstance import TaskInstance
 from airflow.utils import timezone
-from airflow.utils.db import provide_session
+from airflow.utils.db import create_session, provide_session
 from airflow.utils.log.logging_mixin import LoggingMixin
 from airflow.utils.state import State
 
 import six
-from typing import Union, Iterable, Set
+from typing import Set
+
+# The key used by SkipMixin to store XCom data.
+XCOM_SKIPMIXIN_KEY = "skipmixin_key"
+
+# The dictionary key used to denote task IDs that are skipped
+XCOM_SKIPMIXIN_SKIPPED = "skipped"
+
+# The dictionary key used to denote task IDs that are followed
+XCOM_SKIPMIXIN_FOLLOWED = "followed"
 
 
 class SkipMixin(LoggingMixin):
-    @provide_session
-    def skip(self, dag_run, execution_date, tasks, session=None):
+    def _set_state_to_skipped(self, dag_run, execution_date, tasks, session):
         """
-        Sets tasks instances to skipped from the same dag run.
-
-        :param dag_run: the DagRun for which to set the tasks to skipped
-        :param execution_date: execution_date
-        :param tasks: tasks to skip (not task_ids)
-        :param session: db session to use
+        Used internally to set state of task instances to skipped from the same dag run.
         """
-        if not tasks:
-            return
-
         task_ids = [d.task_id for d in tasks]
         now = timezone.utcnow()
 
@@ -48,12 +48,15 @@ class SkipMixin(LoggingMixin):
             session.query(TaskInstance).filter(
                 TaskInstance.dag_id == dag_run.dag_id,
                 TaskInstance.execution_date == dag_run.execution_date,
-                TaskInstance.task_id.in_(task_ids)
-            ).update({TaskInstance.state: State.SKIPPED,
-                      TaskInstance.start_date: now,
-                      TaskInstance.end_date: now},
-                     synchronize_session=False)
-            session.commit()
+                TaskInstance.task_id.in_(task_ids),
+            ).update(
+                {
+                    TaskInstance.state: State.SKIPPED,
+                    TaskInstance.start_date: now,
+                    TaskInstance.end_date: now,
+                },
+                synchronize_session=False,
+            )
         else:
             assert execution_date is not None, "Execution date is None and no dag run"
 
@@ -66,14 +69,56 @@ class SkipMixin(LoggingMixin):
                 ti.end_date = now
                 session.merge(ti)
 
-            session.commit()
+    @provide_session
+    def skip(
+        self, dag_run, execution_date, tasks, session=None,
+    ):
+        """
+        Sets task instances to skipped from the same dag run.
+
+        If this instance has a `task_id` attribute, store the list of skipped task IDs to XCom
+        so that NotPreviouslySkippedDep knows these tasks should be skipped when they
+        are cleared.
 
-    def skip_all_except(self, ti, branch_task_ids):
-        # type: (TaskInstance, Union[str, Iterable[str]]) -> None
+        :param dag_run: the DagRun for which to set the tasks to skipped
+        :param execution_date: execution_date
+        :param tasks: tasks to skip (not task_ids)
+        :param session: db session to use
+        """
+        if not tasks:
+            return
+
+        self._set_state_to_skipped(dag_run, execution_date, tasks, session)
+        session.commit()
+
+        # SkipMixin may not necessarily have a task_id attribute. Only store to XCom if one is available.
+        try:
+            task_id = self.task_id
+        except AttributeError:
+            task_id = None
+
+        if task_id is not None:
+            from airflow.models.xcom import XCom
+
+            XCom.set(
+                key=XCOM_SKIPMIXIN_KEY,
+                value={XCOM_SKIPMIXIN_SKIPPED: [d.task_id for d in tasks]},
+                task_id=task_id,
+                dag_id=dag_run.dag_id,
+                execution_date=dag_run.execution_date,
+                session=session
+            )
+
+    def skip_all_except(
+        self, ti, branch_task_ids
+    ):
         """
         This method implements the logic for a branching operator; given a single
         task ID or list of task IDs to follow, this skips all other tasks
         immediately downstream of this operator.
+
+        branch_task_ids is stored to XCom so that NotPreviouslySkippedDep knows skipped tasks or
+        newly added tasks should be skipped when they are cleared.
         """
         self.log.info("Following branch %s", branch_task_ids)
         if isinstance(branch_task_ids, six.string_types):
@@ -90,13 +135,22 @@ class SkipMixin(LoggingMixin):
             # is also a downstream task of the branch task, we exclude it from skipping.
             branch_downstream_task_ids = set()  # type: Set[str]
             for b in branch_task_ids:
-                branch_downstream_task_ids.update(dag.
-                                                  get_task(b).
-                                                  get_flat_relative_ids(upstream=False))
+                branch_downstream_task_ids.update(
+                    dag.get_task(b).get_flat_relative_ids(upstream=False)
+                )
 
-            skip_tasks = [t for t in downstream_tasks
-                          if t.task_id not in branch_task_ids and
-                          t.task_id not in branch_downstream_task_ids]
+            skip_tasks = [
+                t
+                for t in downstream_tasks
+                if t.task_id not in branch_task_ids
+                and t.task_id not in branch_downstream_task_ids
+            ]
 
             self.log.info("Skipping tasks %s", [t.task_id for t in skip_tasks])
-            self.skip(dag_run, ti.execution_date, skip_tasks)
+            with create_session() as session:
+                self._set_state_to_skipped(
+                    dag_run, ti.execution_date, skip_tasks, session=session
+                )
+                ti.xcom_push(
+                    key=XCOM_SKIPMIXIN_KEY, value={XCOM_SKIPMIXIN_FOLLOWED: branch_task_ids}
+                )
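
The value that SkipMixin now stores under XCOM_SKIPMIXIN_KEY is a plain dict in one of two shapes; a sketch with illustrative task IDs:

    # Written by skip(): the tasks that were explicitly skipped.
    skipped_payload = {"skipped": ["branch_2", "branch_3"]}

    # Written by skip_all_except(): the branch(es) that were followed; anything
    # directly downstream that is not listed is treated as skipped.
    followed_payload = {"followed": ["branch_1"]}
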
diff --git a/airflow/ti_deps/dep_context.py b/airflow/ti_deps/dep_context.py
index c5d999a..74307e4 100644
--- a/airflow/ti_deps/dep_context.py
+++ b/airflow/ti_deps/dep_context.py
@@ -66,6 +66,8 @@ class DepContext(object):
     :type ignore_task_deps: bool
     :param ignore_ti_state: Ignore the task instance's previous failure/success
     :type ignore_ti_state: bool
+    :param finished_tasks: A list of all the finished tasks of this run
+    :type finished_tasks: list[airflow.models.TaskInstance]
     """
     def __init__(
             self,
@@ -76,7 +78,8 @@ class DepContext(object):
             ignore_in_retry_period=False,
             ignore_in_reschedule_period=False,
             ignore_task_deps=False,
-            ignore_ti_state=False):
+            ignore_ti_state=False,
+            finished_tasks=None):
         self.deps = deps or set()
         self.flag_upstream_failed = flag_upstream_failed
         self.ignore_all_deps = ignore_all_deps
@@ -85,6 +88,28 @@ class DepContext(object):
         self.ignore_in_reschedule_period = ignore_in_reschedule_period
         self.ignore_task_deps = ignore_task_deps
         self.ignore_ti_state = ignore_ti_state
+        self.finished_tasks = finished_tasks
+
+    def ensure_finished_tasks(self, dag, execution_date, session):
+        """
+        This method makes sure finished_tasks is populated if it's currently None.
+        This is for the strange feature of running tasks without dag_run.
+
+        :param dag: The DAG for which to find finished tasks
+        :type dag: airflow.models.DAG
+        :param execution_date: The execution_date to look for
+        :param session: Database session to use
+        :return: A list of all the finished tasks of this DAG and execution_date
+        :rtype: list[airflow.models.TaskInstance]
+        """
+        if self.finished_tasks is None:
+            self.finished_tasks = dag.get_task_instances(
+                start_date=execution_date,
+                end_date=execution_date,
+                state=State.finished() + [State.UPSTREAM_FAILED],
+                session=session,
+            )
+        return self.finished_tasks
 
 
 # In order to be able to get queued a task must have one of these states
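
The new ensure_finished_tasks() is a simple memoisation: the first dep that asks pays for the query, and every later dep evaluated against the same DepContext reuses the result. A stripped-down sketch of that pattern, with a stand-in callable where the real code calls dag.get_task_instances():

    class FinishedTasksCache(object):
        """Illustrative stand-in for DepContext.ensure_finished_tasks caching."""

        def __init__(self):
            self.finished_tasks = None

        def ensure_finished_tasks(self, query):
            if self.finished_tasks is None:   # only hit the database once
                self.finished_tasks = query()
            return self.finished_tasks

    cache = FinishedTasksCache()
    first = cache.ensure_finished_tasks(lambda: ["t1", "t2"])   # runs the query
    second = cache.ensure_finished_tasks(lambda: ["ignored"])   # cached result
    assert first is second
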
diff --git a/airflow/ti_deps/deps/not_previously_skipped_dep.py b/airflow/ti_deps/deps/not_previously_skipped_dep.py
new file mode 100644
index 0000000..34ff6ac
--- /dev/null
+++ b/airflow/ti_deps/deps/not_previously_skipped_dep.py
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from airflow.ti_deps.deps.base_ti_dep import BaseTIDep
+
+
+class NotPreviouslySkippedDep(BaseTIDep):
+    """
+    Determines if any of the task's direct upstream relatives have decided this task should
+    be skipped.
+    """
+
+    NAME = "Not Previously Skipped"
+    IGNORABLE = True
+    IS_TASK_DEP = True
+
+    def _get_dep_statuses(
+        self, ti, session, dep_context
+    ):  # pylint: disable=signature-differs
+        from airflow.models.skipmixin import (
+            SkipMixin,
+            XCOM_SKIPMIXIN_KEY,
+            XCOM_SKIPMIXIN_SKIPPED,
+            XCOM_SKIPMIXIN_FOLLOWED,
+        )
+        from airflow.utils.state import State
+
+        upstream = ti.task.get_direct_relatives(upstream=True)
+
+        finished_tasks = dep_context.ensure_finished_tasks(
+            ti.task.dag, ti.execution_date, session
+        )
+
+        finished_task_ids = {t.task_id for t in finished_tasks}
+
+        for parent in upstream:
+            if isinstance(parent, SkipMixin):
+                if parent.task_id not in finished_task_ids:
+                    # This can happen if the parent task has not yet run.
+                    continue
+
+                prev_result = ti.xcom_pull(
+                    task_ids=parent.task_id, key=XCOM_SKIPMIXIN_KEY
+                )
+
+                if prev_result is None:
+                    # This can happen if the parent task has not yet run.
+                    continue
+
+                should_skip = False
+                if (
+                    XCOM_SKIPMIXIN_FOLLOWED in prev_result
+                    and ti.task_id not in prev_result[XCOM_SKIPMIXIN_FOLLOWED]
+                ):
+                    # Skip any tasks that are not in "followed"
+                    should_skip = True
+                elif (
+                    XCOM_SKIPMIXIN_SKIPPED in prev_result
+                    and ti.task_id in prev_result[XCOM_SKIPMIXIN_SKIPPED]
+                ):
+                    # Skip any tasks that are in "skipped"
+                    should_skip = True
+
+                if should_skip:
+                    # If the parent SkipMixin has run, and the XCom result stored indicates this
+                    # ti should be skipped, set ti.state to SKIPPED and fail the rule so that the
+                    # ti does not execute.
+                    ti.set_state(State.SKIPPED, session)
+                    yield self._failing_status(
+                        reason="Skipping because of previous XCom result from parent task {}"
+                        .format(parent.task_id)
+                    )
+                    return
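
Reduced to a pure function over the XCom payload, the skip decision above is roughly the following (the helper name and task IDs are illustrative):

    def should_skip(task_id, prev_result):
        """Mirror of the decision made by NotPreviouslySkippedDep (sketch only)."""
        if prev_result is None:
            # The parent SkipMixin has not recorded a decision yet.
            return False
        if "followed" in prev_result and task_id not in prev_result["followed"]:
            return True   # not on the followed branch
        if "skipped" in prev_result and task_id in prev_result["skipped"]:
            return True   # explicitly marked as skipped by the parent
        return False

    assert should_skip("branch_2", {"followed": ["branch_1"]})
    assert not should_skip("branch_1", {"followed": ["branch_1"]})
    assert should_skip("downstream", {"skipped": ["downstream"]})
    assert not should_skip("downstream", None)
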
diff --git a/requirements/requirements-python2.7.txt b/requirements/requirements-python2.7.txt
index 6973e5a..2dc0f9b 100644
--- a/requirements/requirements-python2.7.txt
+++ b/requirements/requirements-python2.7.txt
@@ -8,12 +8,12 @@ Flask-Caching==1.3.3
 Flask-JWT-Extended==3.24.1
 Flask-Login==0.4.1
 Flask-OpenID==1.2.5
-Flask-SQLAlchemy==2.4.3
+Flask-SQLAlchemy==2.4.4
 Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
 JayDeBeApi==1.2.3
-Jinja2==2.10.3
+Jinja2==2.11.2
 Mako==1.1.3
 Markdown==2.6.11
 MarkupSafe==1.1.1
@@ -38,7 +38,7 @@ ansiwrap==0.8.4
 apipkg==1.5
 apispec==2.0.2
 appdirs==1.4.4
-argcomplete==1.11.1
+argcomplete==1.12.0
 asn1crypto==1.3.0
 aspy.yaml==1.3.0
 astroid==1.6.6
@@ -48,11 +48,11 @@ attrs==19.3.0
 aws-sam-translator==1.25.0
 aws-xray-sdk==2.6.0
 azure-common==1.1.25
-azure-cosmos==3.1.2
+azure-cosmos==3.2.0
 azure-datalake-store==0.0.48
 azure-mgmt-containerinstance==1.5.0
 azure-mgmt-nspkg==3.0.2
-azure-mgmt-resource==10.0.0
+azure-mgmt-resource==10.1.0
 azure-nspkg==3.0.2
 azure-storage-blob==2.1.0
 azure-storage-common==2.1.0
@@ -69,9 +69,9 @@ beautifulsoup4==4.7.1
 billiard==3.6.3.0
 bleach==3.1.5
 blinker==1.4
-boto3==1.14.14
+boto3==1.14.25
 boto==2.49.0
-botocore==1.17.14
+botocore==1.17.25
 cached-property==1.5.1
 cachetools==3.1.1
 cassandra-driver==3.20.2
@@ -80,7 +80,7 @@ celery==4.4.6
 certifi==2020.6.20
 cffi==1.14.0
 cfgv==2.0.1
-cfn-lint==0.33.2
+cfn-lint==0.34.0
 cgroupspy==0.1.6
 chardet==3.0.4
 click==6.7
@@ -91,11 +91,11 @@ configparser==3.5.3
 contextdecorator==0.10.0
 contextlib2==0.6.0.post1
 cookies==2.2.1
-coverage==5.1
+coverage==5.2
 croniter==0.3.34
-cryptography==2.9.2
+cryptography==3.0
 cx-Oracle==7.3.0
-datadog==0.37.1
+datadog==0.38.0
 decorator==4.4.2
 defusedxml==0.6.0
 dill==0.3.2
@@ -112,13 +112,13 @@ email-validator==1.1.1
 entrypoints==0.3
 enum34==1.1.10
 execnet==1.7.1
-fastavro==0.23.5
+fastavro==0.23.6
 filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
-flaky==3.6.1
-flask-swagger==0.2.13
-flower==0.9.4
+flaky==3.7.0
+flask-swagger==0.2.14
+flower==0.9.5
 freezegun==0.3.15
 funcsigs==1.0.2
 functools32==3.2.3.post2
@@ -126,13 +126,13 @@ future-fstrings==1.2.0
 future==0.18.2
 futures==3.3.0
 gcsfs==0.2.3
-google-api-core==1.21.0
-google-api-python-client==1.9.3
-google-auth-httplib2==0.0.3
+google-api-core==1.22.0
+google-api-python-client==1.10.0
+google-auth-httplib2==0.0.4
 google-auth-oauthlib==0.4.1
-google-auth==1.18.0
-google-cloud-bigquery==1.25.0
-google-cloud-bigtable==1.2.1
+google-auth==1.19.2
+google-cloud-bigquery==1.26.0
+google-cloud-bigtable==1.3.0
 google-cloud-container==1.0.1
 google-cloud-core==1.3.0
 google-cloud-dlp==1.0.0
@@ -147,7 +147,7 @@ google-cloud-videointelligence==1.15.0
 google-cloud-vision==1.0.0
 google-resumable-media==0.5.1
 googleapis-common-protos==1.52.0
-graphviz==0.14
+graphviz==0.14.1
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
@@ -155,9 +155,9 @@ gunicorn==19.10.0
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
-humanize==0.5.1
+humanize==1.0.0
 hvac==0.10.4
-identify==1.4.21
+identify==1.4.25
 idna==2.10
 ijson==2.6.1
 imagesize==1.2.0
@@ -230,9 +230,10 @@ pluggy==0.13.1
 pre-commit==1.21.0
 presto-python-client==0.7.0
 prison==0.1.0
+prometheus-client==0.8.0
 prompt-toolkit==1.0.18
 protobuf==3.12.2
-psutil==5.7.0
+psutil==5.7.2
 psycopg2-binary==2.8.5
 ptyprocess==0.6.0
 py==1.9.0
@@ -255,8 +256,8 @@ pytest-cov==2.10.0
 pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
-pytest-timeout==1.4.1
-pytest-xdist==1.32.0
+pytest-timeout==1.4.2
+pytest-xdist==1.33.0
 pytest==4.6.11
 python-daemon==2.2.4
 python-dateutil==2.8.1
@@ -268,7 +269,7 @@ python-nvd3==0.15.0
 python-openid==2.2.5
 python-slugify==4.0.1
 pytz==2020.1
-pytzdata==2019.3
+pytzdata==2020.1
 pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
@@ -287,7 +288,7 @@ sasl==0.2.1
 scandir==1.10.0
 sendgrid==5.6.0
 sentinels==1.0.0
-sentry-sdk==0.15.1
+sentry-sdk==0.16.1
 setproctitle==1.1.10
 simplegeneric==0.8.1
 singledispatch==3.4.0.3
@@ -318,11 +319,11 @@ thrift==0.13.0
 tokenize-rt==3.2.0
 toml==0.10.1
 tornado==5.1.1
-tqdm==4.47.0
+tqdm==4.48.0
 traceback2==1.4.0
 traitlets==4.3.3
 typing-extensions==3.7.4.2
-typing==3.7.4.1
+typing==3.7.4.3
 tzlocal==1.5.1
 unicodecsv==0.14.1
 unittest2==1.1.0
@@ -330,13 +331,13 @@ uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.25
+virtualenv==20.0.27
 wcwidth==0.2.5
 webencodings==0.5.1
 websocket-client==0.57.0
 wrapt==1.12.1
 xmltodict==0.12.0
-yamllint==1.23.0
+yamllint==1.24.2
 zdesk==2.7.1
 zipp==1.2.0
 zope.deprecation==4.4.0
diff --git a/requirements/requirements-python3.8.txt b/requirements/requirements-python3.8.txt
index 747ae42..e715477 100644
--- a/requirements/requirements-python3.8.txt
+++ b/requirements/requirements-python3.8.txt
@@ -8,12 +8,12 @@ Flask-Caching==1.3.3
 Flask-JWT-Extended==3.24.1
 Flask-Login==0.4.1
 Flask-OpenID==1.2.5
-Flask-SQLAlchemy==2.4.3
+Flask-SQLAlchemy==2.4.4
 Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
 JayDeBeApi==1.2.3
-Jinja2==2.10.3
+Jinja2==2.11.2
 Mako==1.1.3
 Markdown==2.6.11
 MarkupSafe==1.1.1
@@ -24,9 +24,9 @@ PySmbClient==0.1.5
 PyYAML==5.3.1
 Pygments==2.6.1
 SQLAlchemy-JSONField==0.9.0
-SQLAlchemy-Utils==0.36.6
+SQLAlchemy-Utils==0.36.8
 SQLAlchemy==1.3.18
-Sphinx==3.1.1
+Sphinx==3.1.2
 Unidecode==1.1.1
 WTForms==2.3.1
 Werkzeug==0.16.1
@@ -39,7 +39,7 @@ ansiwrap==0.8.4
 apipkg==1.5
 apispec==1.3.3
 appdirs==1.4.4
-argcomplete==1.11.1
+argcomplete==1.12.0
 asn1crypto==1.3.0
 astroid==2.4.2
 async-generator==1.10
@@ -48,10 +48,10 @@ attrs==19.3.0
 aws-sam-translator==1.25.0
 aws-xray-sdk==2.6.0
 azure-common==1.1.25
-azure-cosmos==3.1.2
+azure-cosmos==3.2.0
 azure-datalake-store==0.0.48
 azure-mgmt-containerinstance==1.5.0
-azure-mgmt-resource==10.0.0
+azure-mgmt-resource==10.1.0
 azure-nspkg==3.0.2
 azure-storage-blob==2.1.0
 azure-storage-common==2.1.0
@@ -62,9 +62,9 @@ beautifulsoup4==4.7.1
 billiard==3.6.3.0
 black==19.10b0
 blinker==1.4
-boto3==1.14.14
+boto3==1.14.25
 boto==2.49.0
-botocore==1.17.14
+botocore==1.17.25
 cached-property==1.5.1
 cachetools==4.1.1
 cassandra-driver==3.20.2
@@ -73,7 +73,7 @@ celery==4.4.6
 certifi==2020.6.20
 cffi==1.14.0
 cfgv==3.1.0
-cfn-lint==0.33.2
+cfn-lint==0.34.0
 cgroupspy==0.1.6
 chardet==3.0.4
 click==6.7
@@ -81,11 +81,11 @@ cloudant==0.5.10
 colorama==0.4.3
 colorlog==4.0.2
 configparser==3.5.3
-coverage==5.1
+coverage==5.2
 croniter==0.3.34
-cryptography==2.9.2
+cryptography==3.0
 cx-Oracle==8.0.0
-datadog==0.37.1
+datadog==0.38.0
 decorator==4.4.2
 defusedxml==0.6.0
 dill==0.3.2
@@ -101,27 +101,27 @@ elasticsearch==5.5.3
 email-validator==1.1.1
 entrypoints==0.3
 execnet==1.7.1
-fastavro==0.23.5
+fastavro==0.23.6
 filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
-flaky==3.6.1
-flask-swagger==0.2.13
-flower==0.9.4
+flaky==3.7.0
+flask-swagger==0.2.14
+flower==0.9.5
 freezegun==0.3.15
 fsspec==0.7.4
 funcsigs==1.0.2
 future-fstrings==1.2.0
 future==0.18.2
 gcsfs==0.6.2
-google-api-core==1.21.0
-google-api-python-client==1.9.3
-google-auth-httplib2==0.0.3
+google-api-core==1.22.0
+google-api-python-client==1.10.0
+google-auth-httplib2==0.0.4
 google-auth-oauthlib==0.4.1
-google-auth==1.18.0
-google-cloud-bigquery==1.25.0
-google-cloud-bigtable==1.2.1
-google-cloud-container==1.0.1
+google-auth==1.19.2
+google-cloud-bigquery==1.26.0
+google-cloud-bigtable==1.3.0
+google-cloud-container==2.0.0
 google-cloud-core==1.3.0
 google-cloud-dlp==1.0.0
 google-cloud-language==1.3.0
@@ -135,17 +135,17 @@ google-cloud-videointelligence==1.15.0
 google-cloud-vision==1.0.0
 google-resumable-media==0.5.1
 googleapis-common-protos==1.52.0
-graphviz==0.14
+graphviz==0.14.1
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
-gunicorn==19.10.0
+gunicorn==20.0.4
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
-humanize==0.5.1
+humanize==2.5.0
 hvac==0.10.4
-identify==1.4.21
+identify==1.4.25
 idna==2.10
 imagesize==1.2.0
 importlib-metadata==1.7.0
@@ -156,7 +156,7 @@ ipython==7.16.1
 iso8601==0.1.12
 isodate==0.6.0
 itsdangerous==1.1.0
-jedi==0.17.1
+jedi==0.17.2
 jira==2.0.0
 jmespath==0.10.0
 json-merge-patch==0.2
@@ -166,12 +166,13 @@ jsonpickle==1.4.1
 jsonpointer==2.0
 jsonschema==3.2.0
 junit-xml==1.9
-jupyter-client==6.1.5
+jupyter-client==6.1.6
 jupyter-core==4.6.3
 kombu==4.6.11
 kubernetes==11.0.0
 lazy-object-proxy==1.5.0
 ldap3==2.7
+libcst==0.3.7
 lockfile==0.12.2
 marshmallow-enum==1.5.1
 marshmallow-sqlalchemy==0.23.1
@@ -188,14 +189,14 @@ mypy-extensions==0.4.3
 mypy==0.720
 mysqlclient==1.3.14
 natsort==7.0.1
-nbclient==0.4.0
+nbclient==0.4.1
 nbformat==5.0.7
-nest-asyncio==1.3.3
+nest-asyncio==1.4.0
 networkx==2.4
 nodeenv==1.4.0
 nteract-scrapbook==0.4.1
 ntlm-auth==1.5.0
-numpy==1.19.0
+numpy==1.19.1
 oauthlib==3.1.0
 oscrypto==1.2.0
 packaging==20.4
@@ -212,12 +213,14 @@ pexpect==4.8.0
 pickleshare==0.7.5
 pinotdb==0.1.1
 pluggy==0.13.1
-pre-commit==2.5.1
+pre-commit==2.6.0
 presto-python-client==0.7.0
 prison==0.1.3
+prometheus-client==0.8.0
 prompt-toolkit==3.0.5
+proto-plus==1.3.2
 protobuf==3.12.2
-psutil==5.7.0
+psutil==5.7.2
 psycopg2-binary==2.8.5
 ptyprocess==0.6.0
 py==1.9.0
@@ -240,8 +243,8 @@ pytest-cov==2.10.0
 pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
-pytest-timeout==1.4.1
-pytest-xdist==1.32.0
+pytest-timeout==1.4.2
+pytest-xdist==1.33.0
 pytest==5.4.3
 python-daemon==2.2.4
 python-dateutil==2.8.1
@@ -253,12 +256,12 @@ python-nvd3==0.15.0
 python-slugify==4.0.1
 python3-openid==3.2.0
 pytz==2020.1
-pytzdata==2019.3
+pytzdata==2020.1
 pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
 redis==3.5.3
-regex==2020.6.8
+regex==2020.7.14
 requests-futures==0.9.4
 requests-kerberos==0.12.0
 requests-mock==1.8.0
@@ -272,12 +275,12 @@ s3transfer==0.3.3
 sasl==0.2.1
 sendgrid==5.6.0
 sentinels==1.0.0
-sentry-sdk==0.15.1
+sentry-sdk==0.16.1
 setproctitle==1.1.10
 six==1.15.0
 slackclient==1.3.2
 snowballstemmer==2.0.0
-snowflake-connector-python==2.2.8
+snowflake-connector-python==2.2.9
 snowflake-sqlalchemy==1.2.3
 soupsieve==2.0.1
 sphinx-argparse==0.2.5
@@ -304,22 +307,23 @@ thrift-sasl==0.4.2
 thrift==0.13.0
 toml==0.10.1
 tornado==5.1.1
-tqdm==4.47.0
+tqdm==4.48.0
 traitlets==4.3.3
 typed-ast==1.4.1
 typing-extensions==3.7.4.2
+typing-inspect==0.6.0
 tzlocal==1.5.1
 unicodecsv==0.14.1
 uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.25
+virtualenv==20.0.27
 wcwidth==0.2.5
 websocket-client==0.57.0
 wrapt==1.12.1
 xmltodict==0.12.0
-yamllint==1.23.0
+yamllint==1.24.2
 zdesk==2.7.1
 zipp==3.1.0
 zope.deprecation==4.4.0
diff --git a/requirements/setup-2.7.md5 b/requirements/setup-2.7.md5
index 7302c51..d24fa17 100644
--- a/requirements/setup-2.7.md5
+++ b/requirements/setup-2.7.md5
@@ -1 +1 @@
-da591fb5f6ed08129068e227610706cb  /opt/airflow/setup.py
+52a5d9b968ee82e35b5b49ed02361377  /opt/airflow/setup.py
diff --git a/requirements/setup-3.8.md5 b/requirements/setup-3.8.md5
index 7302c51..d24fa17 100644
--- a/requirements/setup-3.8.md5
+++ b/requirements/setup-3.8.md5
@@ -1 +1 @@
-da591fb5f6ed08129068e227610706cb  /opt/airflow/setup.py
+52a5d9b968ee82e35b5b49ed02361377  /opt/airflow/setup.py
diff --git a/tests/jobs/test_scheduler_job.py b/tests/jobs/test_scheduler_job.py
index 48f70a9..161e479 100644
--- a/tests/jobs/test_scheduler_job.py
+++ b/tests/jobs/test_scheduler_job.py
@@ -3011,3 +3011,42 @@ class SchedulerJobTest(unittest.TestCase):
                 self.assertIsNone(start_date)
                 self.assertIsNone(end_date)
                 self.assertIsNone(duration)
+
+
+def test_task_with_upstream_skip_process_task_instances():
+    """
+    Test if _process_task_instances puts a task instance into SKIPPED state if any of its
+    upstream tasks are skipped according to TriggerRuleDep.
+    """
+    with DAG(
+        dag_id='test_task_with_upstream_skip_dag',
+        start_date=DEFAULT_DATE,
+        schedule_interval=None
+    ) as dag:
+        dummy1 = DummyOperator(task_id='dummy1')
+        dummy2 = DummyOperator(task_id="dummy2")
+        dummy3 = DummyOperator(task_id="dummy3")
+        [dummy1, dummy2] >> dummy3
+
+    dag_file_processor = SchedulerJob(dag_ids=[], log=mock.MagicMock())
+    dag.clear()
+    dr = dag.create_dagrun(run_id="manual__{}".format(DEFAULT_DATE.isoformat()),
+                           state=State.RUNNING,
+                           execution_date=DEFAULT_DATE)
+    assert dr is not None
+
+    with create_session() as session:
+        tis = {ti.task_id: ti for ti in dr.get_task_instances(session=session)}
+        # Set dummy1 to skipped and dummy2 to success. dummy3 remains as none.
+        tis[dummy1.task_id].state = State.SKIPPED
+        tis[dummy2.task_id].state = State.SUCCESS
+        assert tis[dummy3.task_id].state == State.NONE
+
+    dag_file_processor._process_task_instances(dag, task_instances_list=Mock())
+
+    with create_session() as session:
+        tis = {ti.task_id: ti for ti in dr.get_task_instances(session=session)}
+        assert tis[dummy1.task_id].state == State.SKIPPED
+        assert tis[dummy2.task_id].state == State.SUCCESS
+        # dummy3 should be skipped because dummy1 is skipped.
+        assert tis[dummy3.task_id].state == State.SKIPPED
diff --git a/tests/operators/test_latest_only_operator.py b/tests/operators/test_latest_only_operator.py
index 3edff8d..6f23f59 100644
--- a/tests/operators/test_latest_only_operator.py
+++ b/tests/operators/test_latest_only_operator.py
@@ -47,15 +47,40 @@ def get_task_instances(task_id):
 
 
 class LatestOnlyOperatorTest(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        from tests.compat import MagicMock
+        from airflow.jobs import SchedulerJob
 
-    def setUp(self):
-        super(LatestOnlyOperatorTest, self).setUp()
-        self.dag = DAG(
+        cls.dag = DAG(
             'test_dag',
             default_args={
                 'owner': 'airflow',
                 'start_date': DEFAULT_DATE},
             schedule_interval=INTERVAL)
+
+        cls.dag.create_dagrun(
+            run_id="manual__1",
+            execution_date=DEFAULT_DATE,
+            state=State.RUNNING
+        )
+
+        cls.dag.create_dagrun(
+            run_id="manual__2",
+            execution_date=timezone.datetime(2016, 1, 1, 12),
+            state=State.RUNNING
+        )
+
+        cls.dag.create_dagrun(
+            run_id="manual__3",
+            execution_date=END_DATE,
+            state=State.RUNNING
+        )
+
+        cls.dag_file_processor = SchedulerJob(dag_ids=[], log=MagicMock())
+
+    def setUp(self):
+        super(LatestOnlyOperatorTest, self).setUp()
         self.addCleanup(self.dag.clear)
         freezer = freeze_time(FROZEN_NOW)
         freezer.start()
@@ -86,6 +111,7 @@ class LatestOnlyOperatorTest(unittest.TestCase):
         downstream_task2.run(start_date=DEFAULT_DATE, end_date=END_DATE)
 
         latest_instances = get_task_instances('latest')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=latest_instances)
         exec_date_to_latest_state = {
             ti.execution_date: ti.state for ti in latest_instances}
         self.assertEqual({
@@ -95,6 +121,7 @@ class LatestOnlyOperatorTest(unittest.TestCase):
             exec_date_to_latest_state)
 
         downstream_instances = get_task_instances('downstream')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=downstream_instances)
         exec_date_to_downstream_state = {
             ti.execution_date: ti.state for ti in downstream_instances}
         self.assertEqual({
@@ -104,6 +131,7 @@ class LatestOnlyOperatorTest(unittest.TestCase):
             exec_date_to_downstream_state)
 
         downstream_instances = get_task_instances('downstream_2')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=downstream_instances)
         exec_date_to_downstream_state = {
             ti.execution_date: ti.state for ti in downstream_instances}
         self.assertEqual({
@@ -126,32 +154,13 @@ class LatestOnlyOperatorTest(unittest.TestCase):
         downstream_task.set_upstream(latest_task)
         downstream_task2.set_upstream(downstream_task)
 
-        self.dag.create_dagrun(
-            run_id="manual__1",
-            start_date=timezone.utcnow(),
-            execution_date=DEFAULT_DATE,
-            state=State.RUNNING
-        )
-
-        self.dag.create_dagrun(
-            run_id="manual__2",
-            start_date=timezone.utcnow(),
-            execution_date=timezone.datetime(2016, 1, 1, 12),
-            state=State.RUNNING
-        )
-
-        self.dag.create_dagrun(
-            run_id="manual__3",
-            start_date=timezone.utcnow(),
-            execution_date=END_DATE,
-            state=State.RUNNING
-        )
-
         latest_task.run(start_date=DEFAULT_DATE, end_date=END_DATE)
         downstream_task.run(start_date=DEFAULT_DATE, end_date=END_DATE)
         downstream_task2.run(start_date=DEFAULT_DATE, end_date=END_DATE)
 
         latest_instances = get_task_instances('latest')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=latest_instances)
+
         exec_date_to_latest_state = {
             ti.execution_date: ti.state for ti in latest_instances}
         self.assertEqual({
@@ -161,6 +170,8 @@ class LatestOnlyOperatorTest(unittest.TestCase):
             exec_date_to_latest_state)
 
         downstream_instances = get_task_instances('downstream')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=downstream_instances)
+
         exec_date_to_downstream_state = {
             ti.execution_date: ti.state for ti in downstream_instances}
         self.assertEqual({
@@ -170,6 +181,7 @@ class LatestOnlyOperatorTest(unittest.TestCase):
             exec_date_to_downstream_state)
 
         downstream_instances = get_task_instances('downstream_2')
+        self.dag_file_processor._process_task_instances(self.dag, task_instances_list=downstream_instances)
         exec_date_to_downstream_state = {
             ti.execution_date: ti.state for ti in downstream_instances}
         self.assertEqual({
diff --git a/tests/operators/test_python_operator.py b/tests/operators/test_python_operator.py
index 6f3dfe2..a92213a 100644
--- a/tests/operators/test_python_operator.py
+++ b/tests/operators/test_python_operator.py
@@ -32,6 +32,7 @@ from datetime import timedelta, date
 
 from airflow.exceptions import AirflowException
 from airflow.models import TaskInstance as TI, DAG, DagRun
+from airflow.models.taskinstance import clear_task_instances
 from airflow.operators.dummy_operator import DummyOperator
 from airflow.operators.python_operator import PythonOperator, BranchPythonOperator
 from airflow.operators.python_operator import ShortCircuitOperator
@@ -491,7 +492,7 @@ class BranchOperatorTest(unittest.TestCase):
             elif ti.task_id == 'branch_2':
                 self.assertEqual(ti.state, State.NONE)
             else:
-                raise Exception
+                raise
 
     def test_with_skip_in_branch_downstream_dependencies2(self):
         self.branch_op = BranchPythonOperator(task_id='make_choice',
@@ -520,7 +521,63 @@ class BranchOperatorTest(unittest.TestCase):
             elif ti.task_id == 'branch_2':
                 self.assertEqual(ti.state, State.NONE)
             else:
-                raise Exception
+                raise
+
+    def test_clear_skipped_downstream_task(self):
+        """
+        After a downstream task is skipped by BranchPythonOperator, clearing the skipped task
+        should not cause it to be executed.
+        """
+        branch_op = BranchPythonOperator(task_id='make_choice',
+                                         dag=self.dag,
+                                         python_callable=lambda: 'branch_1')
+        branches = [self.branch_1, self.branch_2]
+        branch_op >> branches
+        self.dag.clear()
+
+        dr = self.dag.create_dagrun(
+            run_id="manual__",
+            start_date=timezone.utcnow(),
+            execution_date=DEFAULT_DATE,
+            state=State.RUNNING
+        )
+
+        branch_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+
+        for task in branches:
+            task.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+
+        tis = dr.get_task_instances()
+        for ti in tis:
+            if ti.task_id == 'make_choice':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'branch_1':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'branch_2':
+                self.assertEqual(ti.state, State.SKIPPED)
+            else:
+                raise
+
+        children_tis = [ti for ti in tis if ti.task_id in branch_op.get_direct_relative_ids()]
+
+        # Clear the children tasks.
+        with create_session() as session:
+            clear_task_instances(children_tis, session=session, dag=self.dag)
+
+        # Run the cleared tasks again.
+        for task in branches:
+            task.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+
+        # Check if the states are correct after children tasks are cleared.
+        for ti in dr.get_task_instances():
+            if ti.task_id == 'make_choice':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'branch_1':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'branch_2':
+                self.assertEqual(ti.state, State.SKIPPED)
+            else:
+                raise
 
 
 class ShortCircuitOperatorTest(unittest.TestCase):
@@ -660,3 +717,61 @@ class ShortCircuitOperatorTest(unittest.TestCase):
                 self.assertEqual(ti.state, State.NONE)
             else:
                 raise
+
+    def test_clear_skipped_downstream_task(self):
+        """
+        After a downstream task is skipped by ShortCircuitOperator, clearing the skipped task
+        should not cause it to be executed.
+        """
+        dag = DAG('shortcircuit_clear_skipped_downstream_task',
+                  default_args={
+                      'owner': 'airflow',
+                      'start_date': DEFAULT_DATE
+                  },
+                  schedule_interval=INTERVAL)
+        short_op = ShortCircuitOperator(task_id='make_choice',
+                                        dag=dag,
+                                        python_callable=lambda: False)
+        downstream = DummyOperator(task_id='downstream', dag=dag)
+
+        short_op >> downstream
+
+        dag.clear()
+
+        dr = dag.create_dagrun(
+            run_id="manual__",
+            start_date=timezone.utcnow(),
+            execution_date=DEFAULT_DATE,
+            state=State.RUNNING
+        )
+
+        short_op.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+        downstream.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+
+        tis = dr.get_task_instances()
+
+        for ti in tis:
+            if ti.task_id == 'make_choice':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'downstream':
+                self.assertEqual(ti.state, State.SKIPPED)
+            else:
+                raise
+
+        # Clear downstream
+        with create_session() as session:
+            clear_task_instances([t for t in tis if t.task_id == "downstream"],
+                                 session=session,
+                                 dag=dag)
+
+        # Run downstream again
+        downstream.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
+
+        # Check if the states are correct.
+        for ti in dr.get_task_instances():
+            if ti.task_id == 'make_choice':
+                self.assertEqual(ti.state, State.SUCCESS)
+            elif ti.task_id == 'downstream':
+                self.assertEqual(ti.state, State.SKIPPED)
+            else:
+                raise
diff --git a/tests/ti_deps/deps/test_not_previously_skipped_dep.py b/tests/ti_deps/deps/test_not_previously_skipped_dep.py
new file mode 100644
index 0000000..30da9cf
--- /dev/null
+++ b/tests/ti_deps/deps/test_not_previously_skipped_dep.py
@@ -0,0 +1,133 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import pendulum
+
+from airflow.models import DAG, TaskInstance
+from airflow.operators.dummy_operator import DummyOperator
+from airflow.operators.python_operator import BranchPythonOperator
+from airflow.ti_deps.dep_context import DepContext
+from airflow.ti_deps.deps.not_previously_skipped_dep import NotPreviouslySkippedDep
+from airflow.utils.db import create_session
+from airflow.utils.state import State
+
+
+def test_no_parent():
+    """
+    A simple DAG with a single task. NotPreviouslySkippedDep is met.
+    """
+    start_date = pendulum.datetime(2020, 1, 1)
+    dag = DAG("test_test_no_parent_dag", schedule_interval=None, start_date=start_date)
+    op1 = DummyOperator(task_id="op1", dag=dag)
+
+    ti1 = TaskInstance(op1, start_date)
+
+    with create_session() as session:
+        dep = NotPreviouslySkippedDep()
+        assert len(list(dep.get_dep_statuses(ti1, session, DepContext()))) == 0
+        assert dep.is_met(ti1, session)
+        assert ti1.state != State.SKIPPED
+
+
+def test_no_skipmixin_parent():
+    """
+    A simple DAG with no branching. Both op1 and op2 are DummyOperator. NotPreviouslySkippedDep is met.
+    """
+    start_date = pendulum.datetime(2020, 1, 1)
+    dag = DAG(
+        "test_no_skipmixin_parent_dag", schedule_interval=None, start_date=start_date
+    )
+    op1 = DummyOperator(task_id="op1", dag=dag)
+    op2 = DummyOperator(task_id="op2", dag=dag)
+    op1 >> op2
+
+    ti2 = TaskInstance(op2, start_date)
+
+    with create_session() as session:
+        dep = NotPreviouslySkippedDep()
+        assert len(list(dep.get_dep_statuses(ti2, session, DepContext()))) == 0
+        assert dep.is_met(ti2, session)
+        assert ti2.state != State.SKIPPED
+
+
+def test_parent_follow_branch():
+    """
+    A simple DAG with a BranchPythonOperator that follows op2. NotPreviouslySkippedDep is met.
+    """
+    start_date = pendulum.datetime(2020, 1, 1)
+    dag = DAG(
+        "test_parent_follow_branch_dag", schedule_interval=None, start_date=start_date
+    )
+    op1 = BranchPythonOperator(task_id="op1", python_callable=lambda: "op2", dag=dag)
+    op2 = DummyOperator(task_id="op2", dag=dag)
+    op1 >> op2
+
+    TaskInstance(op1, start_date).run()
+    ti2 = TaskInstance(op2, start_date)
+
+    with create_session() as session:
+        dep = NotPreviouslySkippedDep()
+        assert len(list(dep.get_dep_statuses(ti2, session, DepContext()))) == 0
+        assert dep.is_met(ti2, session)
+        assert ti2.state != State.SKIPPED
+
+
+def test_parent_skip_branch():
+    """
+    A simple DAG with a BranchPythonOperator that does not follow op2. NotPreviouslySkippedDep is not met.
+    """
+    start_date = pendulum.datetime(2020, 1, 1)
+    dag = DAG(
+        "test_parent_skip_branch_dag", schedule_interval=None, start_date=start_date
+    )
+    op1 = BranchPythonOperator(task_id="op1", python_callable=lambda: "op3", dag=dag)
+    op2 = DummyOperator(task_id="op2", dag=dag)
+    op3 = DummyOperator(task_id="op3", dag=dag)
+    op1 >> [op2, op3]
+
+    TaskInstance(op1, start_date).run()
+    ti2 = TaskInstance(op2, start_date)
+
+    with create_session() as session:
+        dep = NotPreviouslySkippedDep()
+        assert len(list(dep.get_dep_statuses(ti2, session, DepContext()))) == 1
+        assert not dep.is_met(ti2, session)
+        assert ti2.state == State.SKIPPED
+
+
+def test_parent_not_executed():
+    """
+    A simple DAG with a BranchPythonOperator that does not follow op2. Parent task is not yet
+    executed (no xcom data). NotPreviouslySkippedDep is met (no decision).
+    """
+    start_date = pendulum.datetime(2020, 1, 1)
+    dag = DAG(
+        "test_parent_not_executed_dag", schedule_interval=None, start_date=start_date
+    )
+    op1 = BranchPythonOperator(task_id="op1", python_callable=lambda: "op3", dag=dag)
+    op2 = DummyOperator(task_id="op2", dag=dag)
+    op3 = DummyOperator(task_id="op3", dag=dag)
+    op1 >> [op2, op3]
+
+    ti2 = TaskInstance(op2, start_date)
+
+    with create_session() as session:
+        dep = NotPreviouslySkippedDep()
+        assert len(list(dep.get_dep_statuses(ti2, session, DepContext()))) == 0
+        assert dep.is_met(ti2, session)
+        assert ti2.state == State.NONE
diff --git a/tests/ti_deps/deps/test_trigger_rule_dep.py b/tests/ti_deps/deps/test_trigger_rule_dep.py
index 45514f6..8255015 100644
--- a/tests/ti_deps/deps/test_trigger_rule_dep.py
+++ b/tests/ti_deps/deps/test_trigger_rule_dep.py
@@ -165,6 +165,46 @@ class TriggerRuleDepTest(unittest.TestCase):
         self.assertEqual(len(dep_statuses), 1)
         self.assertFalse(dep_statuses[0].passed)
 
+    def test_all_success_tr_skip(self):
+        """
+        All-success trigger rule fails when some upstream tasks are skipped.
+        """
+        ti = self._get_task_instance(TriggerRule.ALL_SUCCESS,
+                                     upstream_task_ids=["FakeTaskID",
+                                                        "OtherFakeTaskID"])
+        dep_statuses = tuple(TriggerRuleDep()._evaluate_trigger_rule(
+            ti=ti,
+            successes=1,
+            skipped=1,
+            failed=0,
+            upstream_failed=0,
+            done=2,
+            flag_upstream_failed=False,
+            session="Fake Session"))
+        self.assertEqual(len(dep_statuses), 1)
+        self.assertFalse(dep_statuses[0].passed)
+
+    def test_all_success_tr_skip_flag_upstream(self):
+        """
+        All-success trigger rule fails when some upstream tasks are skipped. The state of the ti
+        should be set to SKIPPED when flag_upstream_failed is True.
+        """
+        ti = self._get_task_instance(TriggerRule.ALL_SUCCESS,
+                                     upstream_task_ids=["FakeTaskID",
+                                                        "OtherFakeTaskID"])
+        dep_statuses = tuple(TriggerRuleDep()._evaluate_trigger_rule(
+            ti=ti,
+            successes=1,
+            skipped=1,
+            failed=0,
+            upstream_failed=0,
+            done=2,
+            flag_upstream_failed=True,
+            session=Mock()))
+        self.assertEqual(len(dep_statuses), 1)
+        self.assertFalse(dep_statuses[0].passed)
+        self.assertEqual(ti.state, State.SKIPPED)
+
     def test_none_failed_tr_success(self):
         """
         All success including skip trigger rule success


[airflow] 20/32: Fixes PodMutationHook for backwards compatibility (#9903)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit bcd02ddb81a07026dcbbc5e5a4dc669a6483b59b
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Thu Jul 30 11:40:23 2020 -0700

    Fixes PodMutationHook for backwards compatibility (#9903)
    
    Co-authored-by: Daniel Imberman <da...@astronomer.io>
    Co-authored-by: Kaxil Naik <ka...@gmail.com>
---
 airflow/kubernetes/k8s_model.py              |  16 +++
 airflow/kubernetes/pod.py                    |  33 ++++--
 airflow/kubernetes/pod_launcher.py           |  26 +++-
 airflow/kubernetes/pod_launcher_helper.py    |  96 +++++++++++++++
 airflow/kubernetes/volume_mount.py           |   1 +
 airflow/kubernetes_deprecated/__init__.py    |  16 +++
 airflow/kubernetes_deprecated/pod.py         | 171 +++++++++++++++++++++++++++
 docs/conf.py                                 |   1 +
 tests/kubernetes/models/test_pod.py          |  81 +++++++++++++
 tests/kubernetes/test_pod_launcher_helper.py |  97 +++++++++++++++
 tests/test_local_settings.py                 |  96 +++++++++++++++
 11 files changed, 619 insertions(+), 15 deletions(-)

diff --git a/airflow/kubernetes/k8s_model.py b/airflow/kubernetes/k8s_model.py
index 3fd2f9e..e10a946 100644
--- a/airflow/kubernetes/k8s_model.py
+++ b/airflow/kubernetes/k8s_model.py
@@ -29,6 +29,7 @@ else:
 
 
 class K8SModel(ABC):
+
     """
     These Airflow Kubernetes models are here for backwards compatibility
     reasons only. Ideally clients should use the kubernetes api
@@ -39,6 +40,7 @@ class K8SModel(ABC):
     can be avoided. All of these models implement the
     `attach_to_pod` method so that they integrate with the kubernetes client.
     """
+
     @abc.abstractmethod
     def attach_to_pod(self, pod):
         """
@@ -47,9 +49,23 @@ class K8SModel(ABC):
         :return: The pod with the object attached
         """
 
+    def as_dict(self):
+        res = {}
+        if hasattr(self, "__slots__"):
+            for s in self.__slots__:
+                if hasattr(self, s):
+                    res[s] = getattr(self, s)
+        if hasattr(self, "__dict__"):
+            res_dict = self.__dict__.copy()
+            res_dict.update(res)
+            return res_dict
+        return res
+
 
 def append_to_pod(pod, k8s_objects):
     """
+    Attach Kubernetes objects to the given POD
+
     :param pod: A pod to attach a list of Kubernetes objects to
     :type pod: kubernetes.client.models.V1Pod
     :param k8s_objects: a potential None list of K8SModels
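
The new as_dict() flattens a model's populated __slots__ (merged with __dict__ when present) into a plain dict. A self-contained sketch of the same idea on a Port-like object (the class name is illustrative):

    class PortLike(object):
        """Minimal slots-based model, similar in shape to airflow.kubernetes.pod.Port."""

        __slots__ = ("name", "container_port")

        def __init__(self, name=None, container_port=None):
            self.name = name
            self.container_port = container_port

        def as_dict(self):
            # Same flattening as K8SModel.as_dict() does for a slots-only class.
            return {s: getattr(self, s) for s in self.__slots__ if hasattr(self, s)}

    assert PortLike("http", 8080).as_dict() == {"name": "http", "container_port": 8080}
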
diff --git a/airflow/kubernetes/pod.py b/airflow/kubernetes/pod.py
index 0b332c2..9e455af 100644
--- a/airflow/kubernetes/pod.py
+++ b/airflow/kubernetes/pod.py
@@ -26,7 +26,13 @@ from airflow.kubernetes.k8s_model import K8SModel
 
 
 class Resources(K8SModel):
-    __slots__ = ('request_memory', 'request_cpu', 'limit_memory', 'limit_cpu', 'limit_gpu')
+    __slots__ = ('request_memory',
+                 'request_cpu',
+                 'limit_memory',
+                 'limit_cpu',
+                 'limit_gpu',
+                 'request_ephemeral_storage',
+                 'limit_ephemeral_storage')
 
     """
     :param request_memory: requested memory
@@ -44,15 +50,17 @@ class Resources(K8SModel):
     :param limit_ephemeral_storage: Limit for ephemeral storage
     :type limit_ephemeral_storage: float | str
     """
+
     def __init__(
-            self,
-            request_memory=None,
-            request_cpu=None,
-            request_ephemeral_storage=None,
-            limit_memory=None,
-            limit_cpu=None,
-            limit_gpu=None,
-            limit_ephemeral_storage=None):
+        self,
+        request_memory=None,
+        request_cpu=None,
+        request_ephemeral_storage=None,
+        limit_memory=None,
+        limit_cpu=None,
+        limit_gpu=None,
+        limit_ephemeral_storage=None
+    ):
         self.request_memory = request_memory
         self.request_cpu = request_cpu
         self.request_ephemeral_storage = request_ephemeral_storage
@@ -104,9 +112,10 @@ class Port(K8SModel):
     __slots__ = ('name', 'container_port')
 
     def __init__(
-            self,
-            name=None,
-            container_port=None):
+        self,
+        name=None,
+        container_port=None
+    ):
         """Creates port"""
         self.name = name
         self.container_port = container_port
diff --git a/airflow/kubernetes/pod_launcher.py b/airflow/kubernetes/pod_launcher.py
index d27a647..05df204 100644
--- a/airflow/kubernetes/pod_launcher.py
+++ b/airflow/kubernetes/pod_launcher.py
@@ -26,10 +26,12 @@ from kubernetes.stream import stream as kubernetes_stream
 from requests.exceptions import BaseHTTPError
 
 from airflow import AirflowException
+from airflow.kubernetes.pod_launcher_helper import convert_to_airflow_pod
 from airflow.kubernetes.pod_generator import PodDefaults
-from airflow.settings import pod_mutation_hook
+from airflow import settings
 from airflow.utils.log.logging_mixin import LoggingMixin
 from airflow.utils.state import State
+import kubernetes.client.models as k8s  # noqa
 from .kube_client import get_kube_client
 
 
@@ -62,8 +64,12 @@ class PodLauncher(LoggingMixin):
         self.extract_xcom = extract_xcom
 
     def run_pod_async(self, pod, **kwargs):
-        """Runs POD asynchronously"""
-        pod_mutation_hook(pod)
+        """Runs POD asynchronously
+
+        :param pod: Pod to run
+        :type pod: k8s.V1Pod
+        """
+        pod = self._mutate_pod_backcompat(pod)
 
         sanitized_pod = self._client.api_client.sanitize_for_serialization(pod)
         json_pod = json.dumps(sanitized_pod, indent=2)
@@ -79,6 +85,20 @@ class PodLauncher(LoggingMixin):
             raise e
         return resp
 
+    @staticmethod
+    def _mutate_pod_backcompat(pod):
+        """Backwards compatible Pod Mutation Hook"""
+        try:
+            settings.pod_mutation_hook(pod)
+            # attempt to run pod_mutation_hook with a k8s.V1Pod; if the hook was
+            # written against the old Pod API this raises AttributeError and we fall back
+        except AttributeError:
+            dummy_pod = convert_to_airflow_pod(pod)
+            settings.pod_mutation_hook(dummy_pod)
+            dummy_pod = dummy_pod.to_v1_kubernetes_pod()
+            return dummy_pod
+        return pod
+
     def delete_pod(self, pod):
         """Deletes POD"""
         try:
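
Illustrative sketch of the two hook styles the backcompat shim above supports
(image names and the airflow_local_settings.py contents are placeholders; the same
two styles are exercised in tests/test_local_settings.py further down):

    # Old attribute-style hook: raises AttributeError when given a k8s.V1Pod,
    # so _mutate_pod_backcompat converts the pod to the deprecated Pod class first.
    def pod_mutation_hook(pod):
        pod.namespace = 'airflow-tests'
        pod.image = 'example.com/airflow-base:latest'

    # New-style hook written directly against kubernetes.client.models.V1Pod:
    # no conversion is needed.
    def pod_mutation_hook(pod):
        pod.spec.containers[0].image = 'example.com/airflow-base:latest'
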
diff --git a/airflow/kubernetes/pod_launcher_helper.py b/airflow/kubernetes/pod_launcher_helper.py
new file mode 100644
index 0000000..d8b2698
--- /dev/null
+++ b/airflow/kubernetes/pod_launcher_helper.py
@@ -0,0 +1,96 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from typing import List, Union
+
+import kubernetes.client.models as k8s  # noqa
+
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.volume_mount import VolumeMount
+from airflow.kubernetes.pod import Port
+from airflow.kubernetes_deprecated.pod import Pod
+
+
+def convert_to_airflow_pod(pod):
+    base_container = pod.spec.containers[0]  # type: k8s.V1Container
+
+    dummy_pod = Pod(
+        image=base_container.image,
+        envs=_extract_env_vars(base_container.env),
+        volumes=_extract_volumes(pod.spec.volumes),
+        volume_mounts=_extract_volume_mounts(base_container.volume_mounts),
+        labels=pod.metadata.labels,
+        name=pod.metadata.name,
+        namespace=pod.metadata.namespace,
+        image_pull_policy=base_container.image_pull_policy or 'IfNotPresent',
+        cmds=[],
+        ports=_extract_ports(base_container.ports)
+    )
+    return dummy_pod
+
+
+def _extract_env_vars(env_vars):
+    """
+
+    :param env_vars:
+    :type env_vars: list
+    :return: result
+    :rtype: dict
+    """
+    result = {}
+    env_vars = env_vars or []  # type: List[Union[k8s.V1EnvVar, dict]]
+    for env_var in env_vars:
+        if isinstance(env_var, k8s.V1EnvVar):
+            env_var = env_var.to_dict()  # convert to dict so .get() below works
+        result[env_var.get("name")] = env_var.get("value")
+    return result
+
+
+def _extract_volumes(volumes):
+    result = []
+    volumes = volumes or []  # type: List[Union[k8s.V1Volume, dict]]
+    for volume in volumes:
+        if isinstance(volume, k8s.V1Volume):
+            volume = volume.to_dict()
+        result.append(Volume(name=volume.get("name"), configs=volume))
+    return result
+
+
+def _extract_volume_mounts(volume_mounts):
+    result = []
+    volume_mounts = volume_mounts or []  # type: List[Union[k8s.V1VolumeMount, dict]]
+    for volume_mount in volume_mounts:
+        if isinstance(volume_mount, k8s.V1VolumeMount):
+            volume_mount = volume_mount.to_dict()
+        result.append(
+            VolumeMount(
+                name=volume_mount.get("name"),
+                mount_path=volume_mount.get("mount_path"),
+                sub_path=volume_mount.get("sub_path"),
+                read_only=volume_mount.get("read_only"))
+        )
+
+    return result
+
+
+def _extract_ports(ports):
+    result = []
+    ports = ports or []  # type: List[Union[k8s.V1ContainerPort, dict]]
+    for port in ports:
+        if isinstance(port, k8s.V1ContainerPort):
+            port = port.to_dict()
+        result.append(Port(name=port.get("name"), container_port=port.get("container_port")))
+    return result
diff --git a/airflow/kubernetes/volume_mount.py b/airflow/kubernetes/volume_mount.py
index 0dbca5f..ab87ba9 100644
--- a/airflow/kubernetes/volume_mount.py
+++ b/airflow/kubernetes/volume_mount.py
@@ -24,6 +24,7 @@ from airflow.kubernetes.k8s_model import K8SModel
 
 
 class VolumeMount(K8SModel):
+    __slots__ = ('name', 'mount_path', 'sub_path', 'read_only')
     """
     Initialize a Kubernetes Volume Mount. Used to mount pod level volumes to
     running container.
diff --git a/airflow/kubernetes_deprecated/__init__.py b/airflow/kubernetes_deprecated/__init__.py
new file mode 100644
index 0000000..13a8339
--- /dev/null
+++ b/airflow/kubernetes_deprecated/__init__.py
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
diff --git a/airflow/kubernetes_deprecated/pod.py b/airflow/kubernetes_deprecated/pod.py
new file mode 100644
index 0000000..22a8c12
--- /dev/null
+++ b/airflow/kubernetes_deprecated/pod.py
@@ -0,0 +1,171 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import kubernetes.client.models as k8s
+from airflow.kubernetes.pod import Resources
+
+
+class Pod(object):
+    """
+    Represents a kubernetes pod and manages execution of a single pod.
+
+    :param image: The docker image
+    :type image: str
+    :param envs: A dict containing the environment variables
+    :type envs: dict
+    :param cmds: The command to be run on the pod
+    :type cmds: list[str]
+    :param secrets: Secrets to be launched to the pod
+    :type secrets: list[airflow.contrib.kubernetes.secret.Secret]
+    :param result: The result that will be returned to the operator after
+        successful execution of the pod
+    :type result: any
+    :param image_pull_policy: Specify a policy to cache or always pull an image
+    :type image_pull_policy: str
+    :param image_pull_secrets: Any image pull secrets to be given to the pod.
+        If more than one secret is required, provide a comma separated list:
+        secret_a,secret_b
+    :type image_pull_secrets: str
+    :param affinity: A dict containing a group of affinity scheduling rules
+    :type affinity: dict
+    :param hostnetwork: If True enable host networking on the pod
+    :type hostnetwork: bool
+    :param tolerations: A list of kubernetes tolerations
+    :type tolerations: list
+    :param security_context: A dict containing the security context for the pod
+    :type security_context: dict
+    :param configmaps: A list containing names of configmaps object
+        mounting env variables to the pod
+    :type configmaps: list[str]
+    :param pod_runtime_info_envs: environment variables about
+                                  pod runtime information (ip, namespace, nodeName, podName)
+    :type pod_runtime_info_envs: list[PodRuntimeInfoEnv]
+    :param dnspolicy: Specify a dnspolicy for the pod
+    :type dnspolicy: str
+    """
+    def __init__(
+            self,
+            image,
+            envs,
+            cmds,
+            args=None,
+            secrets=None,
+            labels=None,
+            node_selectors=None,
+            name=None,
+            ports=None,
+            volumes=None,
+            volume_mounts=None,
+            namespace='default',
+            result=None,
+            image_pull_policy='IfNotPresent',
+            image_pull_secrets=None,
+            init_containers=None,
+            service_account_name=None,
+            resources=None,
+            annotations=None,
+            affinity=None,
+            hostnetwork=False,
+            tolerations=None,
+            security_context=None,
+            configmaps=None,
+            pod_runtime_info_envs=None,
+            dnspolicy=None
+    ):
+        self.image = image
+        self.envs = envs or {}
+        self.cmds = cmds
+        self.args = args or []
+        self.secrets = secrets or []
+        self.result = result
+        self.labels = labels or {}
+        self.name = name
+        self.ports = ports or []
+        self.volumes = volumes or []
+        self.volume_mounts = volume_mounts or []
+        self.node_selectors = node_selectors or {}
+        self.namespace = namespace
+        self.image_pull_policy = image_pull_policy
+        self.image_pull_secrets = image_pull_secrets
+        self.init_containers = init_containers
+        self.service_account_name = service_account_name
+        self.resources = resources or Resources()
+        self.annotations = annotations or {}
+        self.affinity = affinity or {}
+        self.hostnetwork = hostnetwork or False
+        self.tolerations = tolerations or []
+        self.security_context = security_context
+        self.configmaps = configmaps or []
+        self.pod_runtime_info_envs = pod_runtime_info_envs or []
+        self.dnspolicy = dnspolicy
+
+    def to_v1_kubernetes_pod(self):
+        """
+        Convert to support k8s V1Pod
+
+        :return: k8s.V1Pod
+        """
+        meta = k8s.V1ObjectMeta(
+            labels=self.labels,
+            name=self.name,
+            namespace=self.namespace,
+        )
+        spec = k8s.V1PodSpec(
+            init_containers=self.init_containers,
+            containers=[
+                k8s.V1Container(
+                    image=self.image,
+                    command=self.cmds,
+                    name="base",
+                    env=[k8s.V1EnvVar(name=key, value=val) for key, val in self.envs.items()],
+                    args=self.args,
+                    image_pull_policy=self.image_pull_policy,
+                )
+            ],
+            image_pull_secrets=self.image_pull_secrets,
+            service_account_name=self.service_account_name,
+            dns_policy=self.dnspolicy,
+            host_network=self.hostnetwork,
+            tolerations=self.tolerations,
+            security_context=self.security_context,
+        )
+
+        pod = k8s.V1Pod(
+            spec=spec,
+            metadata=meta,
+        )
+        for port in self.ports:
+            pod = port.attach_to_pod(pod)
+        for volume in self.volumes:
+            pod = volume.attach_to_pod(pod)
+        for volume_mount in self.volume_mounts:
+            pod = volume_mount.attach_to_pod(pod)
+        for secret in self.secrets:
+            pod = secret.attach_to_pod(pod)
+        for runtime_info in self.pod_runtime_info_envs:
+            pod = runtime_info.attach_to_pod(pod)
+        pod = self.resources.attach_to_pod(pod)
+        return pod
+
+    def as_dict(self):
+        res = self.__dict__
+        res['resources'] = res['resources'].as_dict()
+        res['ports'] = [port.as_dict() for port in res['ports']]
+        res['volume_mounts'] = [volume_mount.as_dict() for volume_mount in res['volume_mounts']]
+        res['volumes'] = [volume.as_dict() for volume in res['volumes']]
+
+        return res
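
Illustrative use of the deprecated class above (values are placeholders; the actual
round trip is asserted in tests/kubernetes/models/test_pod.py further down):

    from airflow.kubernetes_deprecated.pod import Pod

    pod = Pod(
        image="foo",
        envs={"test_key": "test_value"},
        cmds=["airflow"],
        name="bar",
        namespace="baz",
    )
    v1_pod = pod.to_v1_kubernetes_pod()  # kubernetes.client.models.V1Pod
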
diff --git a/docs/conf.py b/docs/conf.py
index 6df66f8..d18b6ea 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -201,6 +201,7 @@ exclude_patterns = [
     '_api/airflow/example_dags',
     '_api/airflow/index.rst',
     '_api/airflow/jobs',
+    '_api/airflow/kubernetes_deprecated',
     '_api/airflow/lineage',
     '_api/airflow/logging_config',
     '_api/airflow/macros',
diff --git a/tests/kubernetes/models/test_pod.py b/tests/kubernetes/models/test_pod.py
index 45c32aa..096b5f0 100644
--- a/tests/kubernetes/models/test_pod.py
+++ b/tests/kubernetes/models/test_pod.py
@@ -74,3 +74,84 @@ class TestPod(unittest.TestCase):
                 'volumes': []
             }
         }, result)
+
+    def test_to_v1_pod(self):
+        from airflow.kubernetes_deprecated.pod import Pod as DeprecatedPod
+        from airflow.kubernetes.volume import Volume
+        from airflow.kubernetes.volume_mount import VolumeMount
+        from airflow.kubernetes.pod import Resources
+
+        pod = DeprecatedPod(
+            image="foo",
+            name="bar",
+            namespace="baz",
+            image_pull_policy="Never",
+            envs={"test_key": "test_value"},
+            cmds=["airflow"],
+            resources=Resources(
+                request_memory="1G",
+                request_cpu="100Mi",
+                limit_gpu="100G"
+            ),
+            volumes=[Volume(name="foo", configs={})],
+            volume_mounts=[VolumeMount(name="foo", mount_path="/mnt", sub_path="/", read_only=True)]
+        )
+
+        k8s_client = ApiClient()
+
+        result = pod.to_v1_kubernetes_pod()
+        result = k8s_client.sanitize_for_serialization(result)
+
+        expected = \
+            {
+                'metadata':
+                    {
+                        'labels': {},
+                        'name': 'bar',
+                        'namespace': 'baz'
+                    },
+                'spec':
+                    {'containers':
+                        [
+                            {
+                                'args': [],
+                                'command': ['airflow'],
+                                'env': [{'name': 'test_key', 'value': 'test_value'}],
+                                'image': 'foo',
+                                'imagePullPolicy': 'Never',
+                                'name': 'base',
+                                'volumeMounts':
+                                    [
+                                        {
+                                            'mountPath': '/mnt',
+                                            'name': 'foo',
+                                            'readOnly': True, 'subPath': '/'
+                                        }
+                                    ],  # noqa
+                                'resources':
+                                    {
+                                        'limits':
+                                            {
+                                                'cpu': None,
+                                                'memory': None,
+                                                'nvidia.com/gpu': '100G',
+                                                'ephemeral-storage': None
+                                            },
+                                        'requests':
+                                            {
+                                                'cpu': '100Mi',
+                                                'memory': '1G',
+                                                'ephemeral-storage': None
+                                            }
+                                }
+                            }
+                        ],
+                        'hostNetwork': False,
+                        'tolerations': [],
+                        'volumes': [
+                            {'name': 'foo'}
+                        ]
+                     }
+            }
+        self.maxDiff = None
+        self.assertEqual(expected, result)
diff --git a/tests/kubernetes/test_pod_launcher_helper.py b/tests/kubernetes/test_pod_launcher_helper.py
new file mode 100644
index 0000000..a308ac3
--- /dev/null
+++ b/tests/kubernetes/test_pod_launcher_helper.py
@@ -0,0 +1,97 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import unittest
+
+from airflow.kubernetes.pod import Port
+from airflow.kubernetes.volume_mount import VolumeMount
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.pod_launcher_helper import convert_to_airflow_pod
+from airflow.kubernetes_deprecated.pod import Pod
+import kubernetes.client.models as k8s
+
+
+class TestPodLauncherHelper(unittest.TestCase):
+    def test_convert_to_airflow_pod(self):
+        input_pod = k8s.V1Pod(
+            metadata=k8s.V1ObjectMeta(
+                name="foo",
+                namespace="bar"
+            ),
+            spec=k8s.V1PodSpec(
+                containers=[
+                    k8s.V1Container(
+                        name="base",
+                        command="foo",
+                        image="myimage",
+                        ports=[
+                            k8s.V1ContainerPort(
+                                name="myport",
+                                container_port=8080,
+                            )
+                        ],
+                        volume_mounts=[k8s.V1VolumeMount(
+                            name="mymount",
+                            mount_path="/tmp/mount",
+                            read_only="True"
+                        )]
+                    )
+                ],
+                volumes=[
+                    k8s.V1Volume(
+                        name="myvolume"
+                    )
+                ]
+            )
+        )
+        result_pod = convert_to_airflow_pod(input_pod)
+
+        expected = Pod(
+            name="foo",
+            namespace="bar",
+            envs={},
+            cmds=[],
+            image="myimage",
+            ports=[
+                Port(name="myport", container_port=8080)
+            ],
+            volume_mounts=[VolumeMount(
+                name="mymount",
+                mount_path="/tmp/mount",
+                sub_path=None,
+                read_only="True"
+            )],
+            volumes=[Volume(name="myvolume", configs={'name': 'myvolume'})]
+        )
+        expected_dict = expected.as_dict()
+        result_dict = result_pod.as_dict()
+        parsed_configs = self.pull_out_volumes(result_dict)
+        result_dict['volumes'] = parsed_configs
+        self.maxDiff = None
+
+        self.assertDictEqual(expected_dict, result_dict)
+
+    def pull_out_volumes(self, result_dict):
+        parsed_configs = []
+        for volume in result_dict['volumes']:
+            vol = {'name': volume['name']}
+            confs = {}
+            for k, v in volume['configs'].items():
+                if v and k[0] != '_':
+                    confs[k] = v
+            vol['configs'] = confs
+            parsed_configs.append(vol)
+        return parsed_configs
diff --git a/tests/test_local_settings.py b/tests/test_local_settings.py
index 3497ee2..0e45ad8 100644
--- a/tests/test_local_settings.py
+++ b/tests/test_local_settings.py
@@ -21,6 +21,7 @@ import os
 import sys
 import tempfile
 import unittest
+from airflow.kubernetes import pod_generator
 from tests.compat import MagicMock, Mock, call, patch
 
 
@@ -40,8 +41,26 @@ def not_policy():
 """
 
 SETTINGS_FILE_POD_MUTATION_HOOK = """
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.pod import Port, Resources
+
 def pod_mutation_hook(pod):
     pod.namespace = 'airflow-tests'
+    pod.image = 'my_image'
+    pod.volumes.append(Volume(name="bar", configs={}))
+    pod.ports = [Port(container_port=8080)]
+    pod.resources = Resources(
+                    request_memory="2G",
+                    request_cpu="200Mi",
+                    limit_gpu="200G"
+                )
+
+"""
+
+SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD = """
+def pod_mutation_hook(pod):
+    pod.spec.containers[0].image = "test-image"
+
 """
 
 
@@ -148,9 +167,86 @@ class LocalSettingsTest(unittest.TestCase):
             settings.import_local_settings()  # pylint: ignore
 
             pod = MagicMock()
+            pod.volumes = []
             settings.pod_mutation_hook(pod)
 
             assert pod.namespace == 'airflow-tests'
+            self.assertEqual(pod.volumes[0].name, "bar")
+
+    def test_pod_mutation_to_k8s_pod(self):
+        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+            from airflow.kubernetes.pod_launcher import PodLauncher
+
+            self.mock_kube_client = Mock()
+            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
+            pod = pod_generator.PodGenerator(
+                image="foo",
+                name="bar",
+                namespace="baz",
+                image_pull_policy="Never",
+                cmds=["foo"],
+                volume_mounts=[
+                    {"name": "foo", "mount_path": "/mnt", "sub_path": "/", "read_only": "True"}
+                ],
+                volumes=[{"name": "foo"}]
+            ).gen_pod()
+
+            self.assertEqual(pod.metadata.namespace, "baz")
+            self.assertEqual(pod.spec.containers[0].image, "foo")
+            self.assertEqual(pod.spec.volumes, [{'name': 'foo'}])
+            self.assertEqual(pod.spec.containers[0].ports, [])
+            self.assertEqual(pod.spec.containers[0].resources, None)
+
+            pod = self.pod_launcher._mutate_pod_backcompat(pod)
+
+            self.assertEqual(pod.metadata.namespace, "airflow-tests")
+            self.assertEqual(pod.spec.containers[0].image, "my_image")
+            self.assertEqual(pod.spec.volumes, [{'name': 'foo'}, {'name': 'bar'}])
+            self.maxDiff = None
+            self.assertEqual(
+                pod.spec.containers[0].ports[0].to_dict(),
+                {
+                    "container_port": 8080,
+                    "host_ip": None,
+                    "host_port": None,
+                    "name": None,
+                    "protocol": None
+                }
+            )
+            self.assertEqual(
+                pod.spec.containers[0].resources.to_dict(),
+                {
+                    'limits': {
+                        'cpu': None,
+                        'memory': None,
+                        'ephemeral-storage': None,
+                        'nvidia.com/gpu': '200G'},
+                    'requests': {'cpu': '200Mi', 'ephemeral-storage': None, 'memory': '2G'}
+                }
+            )
+
+    def test_pod_mutation_v1_pod(self):
+        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+            from airflow.kubernetes.pod_launcher import PodLauncher
+
+            self.mock_kube_client = Mock()
+            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
+            pod = pod_generator.PodGenerator(
+                image="myimage",
+                cmds=["foo"],
+                volume_mounts=[
+                    {"name": "foo", "mount_path": "/mnt", "sub_path": "/", "read_only": "True"}
+                ],
+                volumes=[{"name": "foo"}]
+            ).gen_pod()
+
+            self.assertEqual(pod.spec.containers[0].image, "myimage")
+            pod = self.pod_launcher._mutate_pod_backcompat(pod)
+            self.assertEqual(pod.spec.containers[0].image, "test-image")
 
 
 class TestStatsWithAllowList(unittest.TestCase):


[airflow] 25/32: Fix docstrings in BigQueryGetDataOperator (#10042)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 06b06d77e17ec29d74c59f0dae8da33698e286bb
Author: Jinhui Zhang <me...@old-panda.com>
AuthorDate: Wed Jul 29 05:42:41 2020 -0700

    Fix docstrings in BigQueryGetDataOperator (#10042)
    
    (cherry picked from commit 59cbff0874dd5318cda4b9ce7b7eeb1aad1dad4d)
---
 airflow/contrib/operators/bigquery_get_data.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/airflow/contrib/operators/bigquery_get_data.py b/airflow/contrib/operators/bigquery_get_data.py
index f5e6e50..e16804b 100644
--- a/airflow/contrib/operators/bigquery_get_data.py
+++ b/airflow/contrib/operators/bigquery_get_data.py
@@ -56,7 +56,7 @@ class BigQueryGetDataOperator(BaseOperator):
     :type table_id: str
     :param max_results: The maximum number of records (rows) to be fetched
         from the table. (templated)
-    :type max_results: str
+    :type max_results: int
     :param selected_fields: List of fields to return (comma-separated). If
         unspecified, all fields are returned.
     :type selected_fields: str
@@ -74,7 +74,7 @@ class BigQueryGetDataOperator(BaseOperator):
     def __init__(self,
                  dataset_id,
                  table_id,
-                 max_results='100',
+                 max_results=100,
                  selected_fields=None,
                  bigquery_conn_id='bigquery_default',
                  delegate_to=None,
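
Hypothetical DAG snippet showing the corrected typing (task id, dataset and table
names are made up for illustration; max_results is an int, not the string '100'):

    from airflow.contrib.operators.bigquery_get_data import BigQueryGetDataOperator

    get_data = BigQueryGetDataOperator(
        task_id="get_data_from_bq",
        dataset_id="my_dataset",
        table_id="my_table",
        max_results=100,
        selected_fields="DATE,NAME",
    )
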


[airflow] 22/32: Pin pymongo version to <3.11.0

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 5c9ff4d8176d4409dd19b5c16706d1936102279a
Author: Kaxil Naik <ka...@gmail.com>
AuthorDate: Sun Aug 2 11:42:39 2020 +0100

    Pin pymongo version to <3.11.0
---
 setup.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/setup.py b/setup.py
index e11ce70..35323d2 100644
--- a/setup.py
+++ b/setup.py
@@ -307,7 +307,7 @@ ldap = [
 ]
 mongo = [
     'dnspython>=1.13.0,<2.0.0',
-    'pymongo>=3.6.0',
+    'pymongo>=3.6.0,<3.11.0',
 ]
 mssql = [
     'pymssql~=2.1.1',


[airflow] 18/32: Breeze / KinD - support earlier k8s versions, fix recreate and kubectl versioning (#9905)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 6b219e19f32ccb7017fae33a761fa5e800f6e56f
Author: Beni Ben zikry <bb...@gmail.com>
AuthorDate: Wed Jul 22 13:24:46 2020 +0300

    Breeze / KinD  - support earlier k8s versions, fix recreate and kubectl versioning (#9905)
    
    (cherry picked from commit 24a951e8ed7159d3c2cbdb76d8d4ab807ee95cbd)
---
 .github/workflows/ci.yml                | 3 +--
 BREEZE.rst                              | 4 ++--
 breeze                                  | 4 +++-
 breeze-complete                         | 2 +-
 scripts/ci/libraries/_initialization.sh | 2 +-
 scripts/ci/libraries/_kind.sh           | 6 +++---
 6 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index d0e7ab2..72f6ba5 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -130,8 +130,7 @@ jobs:
         python-version: [3.6, 3.7]
         kube-mode:
           - image
-        kubernetes-version:
-          - "v1.18.2"
+        kubernetes-version: [v1.18.6, v1.17.5, v1.16.9]
         kind-version:
           - "v0.8.0"
         helm-version:
diff --git a/BREEZE.rst b/BREEZE.rst
index 5976b0f..60ea13a 100644
--- a/BREEZE.rst
+++ b/BREEZE.rst
@@ -1756,9 +1756,9 @@ This is the current syntax for  `./breeze <./breeze>`_:
           Kubernetes version - only used in case one of --kind-cluster-* commands is used.
           One of:
 
-                 v1.18.2
+                 v1.18.6 v1.17.5 v1.16.9
 
-          Default: v1.18.2
+          Default: v1.18.6
 
   --kind-version <KIND_VERSION>
           Kind version - only used in case one of --kind-cluster-* commands is used.
diff --git a/breeze b/breeze
index 91585a1..44e8d03 100755
--- a/breeze
+++ b/breeze
@@ -146,7 +146,7 @@ function setup_default_breeze_variables() {
 
     _BREEZE_DEFAULT_BACKEND="sqlite"
     _BREEZE_DEFAULT_KUBERNETES_MODE="image"
-    _BREEZE_DEFAULT_KUBERNETES_VERSION="v1.18.2"
+    _BREEZE_DEFAULT_KUBERNETES_VERSION="v1.18.6"
     _BREEZE_DEFAULT_KIND_VERSION="v0.8.0"
     _BREEZE_DEFAULT_HELM_VERSION="v3.2.4"
     _BREEZE_DEFAULT_POSTGRES_VERSION="9.6"
@@ -1927,6 +1927,8 @@ function run_build_command {
               echo "Stops KinD cluster"
             elif [[ ${KIND_CLUSTER_OPERATION} == "restart" ]] ; then
               echo "Restarts KinD cluster"
+            elif [[ ${KIND_CLUSTER_OPERATION} == "recreate" ]] ; then
+              echo "Recreates KinD cluster"
             elif [[ ${KIND_CLUSTER_OPERATION} == "status" ]] ; then
               echo "Checks status of KinD cluster"
             elif [[ ${KIND_CLUSTER_OPERATION} == "deploy" ]] ; then
diff --git a/breeze-complete b/breeze-complete
index 574b9c7..45732b3 100644
--- a/breeze-complete
+++ b/breeze-complete
@@ -21,7 +21,7 @@ _BREEZE_ALLOWED_PYTHON_MAJOR_MINOR_VERSIONS="2.7 3.5 3.6 3.7 3.8"
 _BREEZE_ALLOWED_BACKENDS="sqlite mysql postgres"
 _BREEZE_ALLOWED_INTEGRATIONS="cassandra kerberos mongo openldap rabbitmq redis all"
 _BREEZE_ALLOWED_KUBERNETES_MODES="image git"
-_BREEZE_ALLOWED_KUBERNETES_VERSIONS="v1.18.2"
+_BREEZE_ALLOWED_KUBERNETES_VERSIONS="v1.18.6 v1.17.5 v1.16.9"
 _BREEZE_ALLOWED_HELM_VERSIONS="v3.2.4"
 _BREEZE_ALLOWED_KIND_VERSIONS="v0.8.0"
 _BREEZE_ALLOWED_MYSQL_VERSIONS="5.6 5.7"
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index d3224e7..63041fd 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -205,7 +205,7 @@ function initialize_common_environment {
     export VERSION_SUFFIX_FOR_SVN=""
 
     # Default Kubernetes version
-    export DEFAULT_KUBERNETES_VERSION="v1.18.2"
+    export DEFAULT_KUBERNETES_VERSION="v1.18.6"
 
     # Default KinD version
     export DEFAULT_KIND_VERSION="v0.8.0"
diff --git a/scripts/ci/libraries/_kind.sh b/scripts/ci/libraries/_kind.sh
index 2cf43c3..173af6d 100644
--- a/scripts/ci/libraries/_kind.sh
+++ b/scripts/ci/libraries/_kind.sh
@@ -45,7 +45,7 @@ function make_sure_kubernetes_tools_are_installed() {
     HELM_VERSION=${HELM_VERSION:=${DEFAULT_HELM_VERSION}}
     HELM_URL="https://get.helm.sh/helm-${HELM_VERSION}-${SYSTEM}-amd64.tar.gz"
     HELM_PATH="${BUILD_CACHE_DIR}/bin/helm"
-    KUBECTL_VERSION=${KUBENETES_VERSION:=${DEFAULT_KUBERNETES_VERSION}}
+    KUBECTL_VERSION=${KUBERNETES_VERSION:=${DEFAULT_KUBERNETES_VERSION}}
     KUBECTL_URL="https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/${SYSTEM}/amd64/kubectl"
     KUBECTL_PATH="${BUILD_CACHE_DIR}/bin/kubectl"
     mkdir -pv "${BUILD_CACHE_DIR}/bin"
@@ -125,7 +125,7 @@ function delete_cluster() {
 }
 
 function perform_kind_cluster_operation() {
-    ALLOWED_KIND_OPERATIONS="[ start restart stop deploy test shell ]"
+    ALLOWED_KIND_OPERATIONS="[ start restart stop deploy test shell recreate ]"
 
     set +u
     if [[ -z "${1}" ]]; then
@@ -215,7 +215,7 @@ function perform_kind_cluster_operation() {
             echo "Creating cluster"
             echo
             create_cluster
-        elif [[ ${OPERATION} == "stop" || ${OEPRATON} == "deploy" || \
+        elif [[ ${OPERATION} == "stop" || ${OPERATION} == "deploy" || \
                 ${OPERATION} == "test" || ${OPERATION} == "shell" ]]; then
             echo >&2
             echo >&2 "Cluster ${KIND_CLUSTER_NAME} does not exist. It should exist for ${OPERATION} operation"


[airflow] 11/32: Added "all" to allowed breeze integrations and tried to clarify on fail (#9872)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 6e290cf0f850c36bb5386bf2f21045765bbbe600
Author: Alexander Sutcliffe <41...@users.noreply.github.com>
AuthorDate: Sat Jul 18 07:38:02 2020 +0200

    Added "all" to allowed breeze integrations and tried to clarify on fail (#9872)
    
    (cherry picked from commit 64929eeb70dcb5aa39ce3d504603721c20518cd4)
---
 BREEZE.rst                                             | 11 ++++++-----
 breeze                                                 |  1 -
 breeze-complete                                        |  2 +-
 scripts/ci/pre_commit/pre_commit_check_integrations.sh |  4 +++-
 4 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/BREEZE.rst b/BREEZE.rst
index 435b21e..c377ec0 100644
--- a/BREEZE.rst
+++ b/BREEZE.rst
@@ -413,8 +413,10 @@ way as when you enter the environment. You can do it multiple times and open as
 CLIs for cloud providers
 ------------------------
 
-Restarting Breeze environment
------------------------------
+For development convenience we installed simple wrappers for the most common cloud providers' CLIs. Those
+CLIs are not installed when you build or pull the image - they are downloaded as docker images
+the first time you attempt to use them. Each CLI is downloaded and executed in your host's docker engine, so once
+downloaded it will stay there until you remove the downloaded images from your host.
 
 For each of those CLI credentials are taken (automatically) from the credentials you have defined in
 your ${HOME} directory on host.
@@ -697,7 +699,7 @@ Generating requirements
 Whenever you modify and commit setup.py, you need to re-generate requirement files. Those requirement
 files are stored separately for each python version in the ``requirements`` folder. Those are
 constraints rather than requirements as described in detail in the
-`CONTRIBUTING <CONTRIBUTING.rst#pinned-requirement-files>`_ documentation.
+`CONTRIBUTING <CONTRIBUTING.rst#pinned-requirement-files>`_ contributing documentation.
 
 In case you modify setup.py you need to update the requirements - for every python version supported.
 
@@ -1736,11 +1738,10 @@ This is the current syntax for  `./breeze <./breeze>`_:
   -i, --integration <INTEGRATION>
           Integration to start during tests - it determines which integrations are started
           for integration tests. There can be more than one integration started, or all to
-          }
           start all integrations. Selected integrations are not saved for future execution.
           One of:
 
-                 cassandra kerberos mongo openldap rabbitmq redis
+                 cassandra kerberos mongo openldap rabbitmq redis all
 
   ****************************************************************************************************
    Kind kubernetes and Kubernetes tests configuration(optional)
diff --git a/breeze b/breeze
index abb95b7..571d383 100755
--- a/breeze
+++ b/breeze
@@ -1411,7 +1411,6 @@ function flag_breeze_actions() {
 -i, --integration <INTEGRATION>
         Integration to start during tests - it determines which integrations are started
         for integration tests. There can be more than one integration started, or all to
-        }
         start all integrations. Selected integrations are not saved for future execution.
         One of:
 
diff --git a/breeze-complete b/breeze-complete
index c1d955c..73368b8 100644
--- a/breeze-complete
+++ b/breeze-complete
@@ -19,7 +19,7 @@
 
 _BREEZE_ALLOWED_PYTHON_MAJOR_MINOR_VERSIONS="2.7 3.5 3.6 3.7 3.8"
 _BREEZE_ALLOWED_BACKENDS="sqlite mysql postgres"
-_BREEZE_ALLOWED_INTEGRATIONS="cassandra kerberos mongo openldap rabbitmq redis"
+_BREEZE_ALLOWED_INTEGRATIONS="cassandra kerberos mongo openldap rabbitmq redis all"
 _BREEZE_ALLOWED_KUBERNETES_MODES="image git"
 _BREEZE_ALLOWED_KUBERNETES_VERSIONS="v1.18.2"
 _BREEZE_ALLOWED_HELM_VERSIONS="v3.2.4"
diff --git a/scripts/ci/pre_commit/pre_commit_check_integrations.sh b/scripts/ci/pre_commit/pre_commit_check_integrations.sh
index 6871941..a0b33c8 100755
--- a/scripts/ci/pre_commit/pre_commit_check_integrations.sh
+++ b/scripts/ci/pre_commit/pre_commit_check_integrations.sh
@@ -26,7 +26,7 @@ cd "${AIRFLOW_SOURCES}" || exit 1
 
 . breeze-complete
 
-if [[ ${AVAILABLE_INTEGRATIONS} != "${_BREEZE_ALLOWED_INTEGRATIONS}" ]]; then
+if [[ "${AVAILABLE_INTEGRATIONS} all" != "${_BREEZE_ALLOWED_INTEGRATIONS}" ]]; then
   echo
   echo "Error: Allowed integrations do not match!"
   echo
@@ -36,6 +36,8 @@ if [[ ${AVAILABLE_INTEGRATIONS} != "${_BREEZE_ALLOWED_INTEGRATIONS}" ]]; then
   echo "The ./breeze-complete integrations (_BREEZE_ALLOWED_INTEGRATIONS):"
   echo "${_BREEZE_ALLOWED_INTEGRATIONS}"
   echo
+  echo "_BREEZE_ALLOWED_INTEGRATIONS should match AVAILABLE_INTEGRATIONS plus 'all'"
+  echo
   echo "Please align the two!"
   echo
   exit 1


[airflow] 01/32: Update some dependencies (#9684)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 0b5f0fc2c8717030a9de8555f3c8ccfa3875c9a0
Author: Hartorn <ha...@gmail.com>
AuthorDate: Mon Jul 6 13:04:35 2020 +0200

    Update some dependencies (#9684)
    
    (cherry picked from commit fd62b1c5262086597db2aa439a09d86794a33345)
---
 .github/workflows/ci.yml                |   5 +-
 breeze                                  |   4 -
 requirements/requirements-python2.7.txt | 132 +++++++++++++++++---------------
 requirements/requirements-python3.5.txt |  26 +++----
 requirements/requirements-python3.6.txt |  54 +++++++------
 requirements/requirements-python3.7.txt |  52 +++++++------
 requirements/requirements-python3.8.txt |  51 ++++++------
 requirements/setup-3.5.md5              |   2 +-
 requirements/setup-3.6.md5              |   2 +-
 requirements/setup-3.7.md5              |   2 +-
 scripts/ci/ci_check_license.sh          |   2 +-
 scripts/ci/ci_fix_ownership.sh          |   2 +-
 scripts/ci/ci_flake8.sh                 |   2 +-
 scripts/ci/ci_generate_requirements.sh  |   2 +-
 scripts/ci/ci_push_ci_image.sh          |   2 +-
 scripts/ci/ci_push_production_images.sh |   2 +-
 scripts/ci/ci_run_static_checks.sh      |   2 +-
 setup.py                                |   6 +-
 18 files changed, 180 insertions(+), 170 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index c37253f..029c341 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -32,7 +32,7 @@ env:
   DB_RESET: "true"
   VERBOSE: "true"
   UPGRADE_TO_LATEST_REQUIREMENTS: "false"
-  PYTHON_MAJOR_MINOR_VERSION: 3.5
+  PYTHON_MAJOR_MINOR_VERSION: 3.6
   USE_GITHUB_REGISTRY: "true"
   CACHE_IMAGE_PREFIX: ${{ github.repository }}
   CACHE_REGISTRY_USERNAME: ${{ github.actor }}
@@ -76,7 +76,6 @@ jobs:
     runs-on: ubuntu-latest
     env:
       CI_JOB_TYPE: "Documentation"
-      PYTHON_MAJOR_MINOR_VERSION: 3.6
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -207,7 +206,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
         with:
-          python-version: '3.x'
+          python-version: '3.6'
       - name: "Free space"
         run: ./scripts/ci/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
diff --git a/breeze b/breeze
index 3e8d2c3..534bec3 100755
--- a/breeze
+++ b/breeze
@@ -827,10 +827,6 @@ function parse_arguments() {
         generate-requirements)
           LAST_SUBCOMMAND="${1}"
           COMMAND_TO_RUN="perform_generate_requirements"
-          # if you want to  generate requirement - you want to build the image too :)
-          export FORCE_ANSWER_TO_QUESTIONS="yes"
-          # and assume you want to build it no matter if it is needed
-          export FORCE_BUILD_IMAGES="true"
           shift ;;
         push-image)
           LAST_SUBCOMMAND="${1}"
diff --git a/requirements/requirements-python2.7.txt b/requirements/requirements-python2.7.txt
index 4f9242c..6973e5a 100644
--- a/requirements/requirements-python2.7.txt
+++ b/requirements/requirements-python2.7.txt
@@ -8,7 +8,7 @@ Flask-Caching==1.3.3
 Flask-JWT-Extended==3.24.1
 Flask-Login==0.4.1
 Flask-OpenID==1.2.5
-Flask-SQLAlchemy==2.4.4
+Flask-SQLAlchemy==2.4.3
 Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
@@ -23,9 +23,9 @@ PyNaCl==1.4.0
 PySmbClient==0.1.5
 PyYAML==5.3.1
 Pygments==2.5.2
-SQLAlchemy-JSONField==0.9.0
+SQLAlchemy-JSONField==0.8.0
 SQLAlchemy==1.3.18
-Sphinx==3.1.2
+Sphinx==1.8.5
 Unidecode==1.1.1
 WTForms==2.3.1
 Werkzeug==0.16.1
@@ -36,37 +36,44 @@ amqp==2.6.0
 analytics-python==1.2.9
 ansiwrap==0.8.4
 apipkg==1.5
-apispec==3.3.1
+apispec==2.0.2
 appdirs==1.4.4
-argcomplete==1.12.0
+argcomplete==1.11.1
 asn1crypto==1.3.0
 aspy.yaml==1.3.0
-astroid==2.4.2
+astroid==1.6.6
 atlasclient==1.0.0
 atomicwrites==1.4.0
 attrs==19.3.0
 aws-sam-translator==1.25.0
 aws-xray-sdk==2.6.0
 azure-common==1.1.25
-azure-cosmos==3.2.0
+azure-cosmos==3.1.2
 azure-datalake-store==0.0.48
 azure-mgmt-containerinstance==1.5.0
-azure-mgmt-resource==10.1.0
+azure-mgmt-nspkg==3.0.2
+azure-mgmt-resource==10.0.0
 azure-nspkg==3.0.2
 azure-storage-blob==2.1.0
 azure-storage-common==2.1.0
+azure-storage-nspkg==3.1.0
 azure-storage==0.36.0
-backcall==0.2.0
+backports-abc==0.5
+backports.functools-lru-cache==1.6.1
+backports.shutil-get-terminal-size==1.0.0
+backports.ssl-match-hostname==3.7.0.1
+backports.tempfile==1.0
+backports.weakref==1.0.post1
 bcrypt==3.1.7
 beautifulsoup4==4.7.1
 billiard==3.6.3.0
 bleach==3.1.5
 blinker==1.4
-boto3==1.14.20
+boto3==1.14.14
 boto==2.49.0
-botocore==1.17.20
+botocore==1.17.14
 cached-property==1.5.1
-cachetools==4.1.1
+cachetools==3.1.1
 cassandra-driver==3.20.2
 cattrs==1.0.0
 celery==4.4.6
@@ -81,11 +88,14 @@ cloudant==0.5.10
 colorama==0.4.3
 colorlog==4.0.2
 configparser==3.5.3
-coverage==5.2
+contextdecorator==0.10.0
+contextlib2==0.6.0.post1
+cookies==2.2.1
+coverage==5.1
 croniter==0.3.34
 cryptography==2.9.2
-cx-Oracle==8.0.0
-datadog==0.38.0
+cx-Oracle==7.3.0
+datadog==0.37.1
 decorator==4.4.2
 defusedxml==0.6.0
 dill==0.3.2
@@ -100,25 +110,27 @@ elasticsearch-dsl==5.4.0
 elasticsearch==5.5.3
 email-validator==1.1.1
 entrypoints==0.3
+enum34==1.1.10
 execnet==1.7.1
-fastavro==0.23.6
+fastavro==0.23.5
 filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
-flaky==3.7.0
+flaky==3.6.1
 flask-swagger==0.2.13
-flower==0.9.5
+flower==0.9.4
 freezegun==0.3.15
-fsspec==0.7.4
 funcsigs==1.0.2
+functools32==3.2.3.post2
 future-fstrings==1.2.0
 future==0.18.2
-gcsfs==0.6.2
+futures==3.3.0
+gcsfs==0.2.3
 google-api-core==1.21.0
 google-api-python-client==1.9.3
-google-auth-httplib2==0.0.4
+google-auth-httplib2==0.0.3
 google-auth-oauthlib==0.4.1
-google-auth==1.19.0
+google-auth==1.18.0
 google-cloud-bigquery==1.25.0
 google-cloud-bigtable==1.2.1
 google-cloud-container==1.0.1
@@ -135,7 +147,7 @@ google-cloud-videointelligence==1.15.0
 google-cloud-vision==1.0.0
 google-resumable-media==0.5.1
 googleapis-common-protos==1.52.0
-graphviz==0.14.1
+graphviz==0.14
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
@@ -143,22 +155,22 @@ gunicorn==19.10.0
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
-humanize==2.5.0
+humanize==0.5.1
 hvac==0.10.4
-identify==1.4.23
+identify==1.4.21
 idna==2.10
 ijson==2.6.1
 imagesize==1.2.0
 importlib-metadata==1.7.0
 importlib-resources==3.0.0
-inflection==0.5.0
+inflection==0.3.1
+ipaddress==1.0.23
 ipdb==0.13.3
 ipython-genutils==0.2.0
-ipython==7.9.0
+ipython==5.10.0
 iso8601==0.1.12
 isodate==0.6.0
 itsdangerous==1.1.0
-jedi==0.17.1
 jira==2.0.0
 jmespath==0.10.0
 json-merge-patch==0.2
@@ -168,12 +180,13 @@ jsonpickle==1.4.1
 jsonpointer==2.0
 jsonschema==3.2.0
 junit-xml==1.9
-jupyter-client==6.1.6
+jupyter-client==5.3.5
 jupyter-core==4.6.3
-kombu==4.6.11
+kombu==4.6.3
 kubernetes==11.0.0
 lazy-object-proxy==1.5.0
 ldap3==2.7
+linecache2==1.0.0
 lockfile==0.12.2
 marshmallow-enum==1.5.1
 marshmallow-sqlalchemy==0.18.0
@@ -182,33 +195,30 @@ mccabe==0.6.1
 mistune==0.8.4
 mock==3.0.5
 mongomock==3.19.0
-more-itertools==8.4.0
+monotonic==1.5
+more-itertools==5.0.0
 moto==1.3.14
 msrest==0.6.17
 msrestazure==0.6.4
 multi-key-dict==2.0.3
-mypy-extensions==0.4.3
-mypy==0.720
 mysqlclient==1.3.14
-natsort==7.0.1
-nbclient==0.1.0
+natsort==6.2.1
 nbconvert==5.6.1
-nbformat==5.0.7
-networkx==2.4
+nbformat==4.4.0
+networkx==2.2
 nodeenv==1.4.0
-nteract-scrapbook==0.4.1
+nteract-scrapbook==0.3.1
 ntlm-auth==1.5.0
-numpy==1.18.5
+numpy==1.16.6
 oauthlib==3.1.0
 oscrypto==1.2.0
 packaging==20.4
-pandas-gbq==0.13.2
-pandas==0.25.3
+pandas-gbq==0.13.1
+pandas==0.24.2
 pandocfilters==1.4.2
-papermill==2.0.0
+papermill==1.2.1
 parameterized==0.7.4
 paramiko==2.7.1
-parso==0.7.0
 pathlib2==2.3.5
 pathspec==0.8.0
 pbr==5.4.5
@@ -220,15 +230,13 @@ pluggy==0.13.1
 pre-commit==1.21.0
 presto-python-client==0.7.0
 prison==0.1.0
-prometheus-client==0.8.0
-prompt-toolkit==2.0.10
+prompt-toolkit==1.0.18
 protobuf==3.12.2
 psutil==5.7.0
 psycopg2-binary==2.8.5
 ptyprocess==0.6.0
 py==1.9.0
 pyOpenSSL==19.1.0
-pyarrow==0.17.1
 pyasn1-modules==0.2.8
 pyasn1==0.4.8
 pycodestyle==2.6.0
@@ -248,8 +256,8 @@ pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
 pytest-timeout==1.4.1
-pytest-xdist==1.33.0
-pytest==5.4.3
+pytest-xdist==1.32.0
+pytest==4.6.11
 python-daemon==2.2.4
 python-dateutil==2.8.1
 python-editor==1.0.4
@@ -257,10 +265,10 @@ python-http-client==3.2.7
 python-jenkins==1.7.0
 python-jose==3.1.0
 python-nvd3==0.15.0
+python-openid==2.2.5
 python-slugify==4.0.1
-python3-openid==3.2.0
 pytz==2020.1
-pytzdata==2020.1
+pytzdata==2019.3
 pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
@@ -273,33 +281,30 @@ requests-oauthlib==1.3.0
 requests-toolbelt==0.9.1
 requests==2.24.0
 responses==0.10.15
-rsa==4.6
+rsa==4.0
 s3transfer==0.3.3
 sasl==0.2.1
+scandir==1.10.0
 sendgrid==5.6.0
 sentinels==1.0.0
-sentry-sdk==0.16.1
+sentry-sdk==0.15.1
 setproctitle==1.1.10
 simplegeneric==0.8.1
+singledispatch==3.4.0.3
 six==1.15.0
 slackclient==1.3.2
+snakebite==2.11.0
 snowballstemmer==2.0.0
-snowflake-connector-python==2.2.8
+snowflake-connector-python==2.1.3
 snowflake-sqlalchemy==1.2.3
-soupsieve==2.0.1
+soupsieve==1.9.6
 sphinx-argparse==0.2.5
 sphinx-autoapi==1.0.0
 sphinx-jinja==1.1.1
 sphinx-rtd-theme==0.5.0
-sphinxcontrib-applehelp==1.0.2
-sphinxcontrib-devhelp==1.0.2
 sphinxcontrib-dotnetdomain==0.4
 sphinxcontrib-golangdomain==0.2.0.dev0
-sphinxcontrib-htmlhelp==1.0.3
 sphinxcontrib-httpdomain==1.7.0
-sphinxcontrib-jsmath==1.0.1
-sphinxcontrib-qthelp==1.0.3
-sphinxcontrib-serializinghtml==1.1.4
 sphinxcontrib-websupport==1.1.2
 sshpubkeys==3.1.0
 sshtunnel==0.1.5
@@ -314,17 +319,18 @@ tokenize-rt==3.2.0
 toml==0.10.1
 tornado==5.1.1
 tqdm==4.47.0
+traceback2==1.4.0
 traitlets==4.3.3
-typed-ast==1.4.1
 typing-extensions==3.7.4.2
-typing==3.7.4.3
+typing==3.7.4.1
 tzlocal==1.5.1
 unicodecsv==0.14.1
+unittest2==1.1.0
 uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.26
+virtualenv==20.0.25
 wcwidth==0.2.5
 webencodings==0.5.1
 websocket-client==0.57.0
diff --git a/requirements/requirements-python3.5.txt b/requirements/requirements-python3.5.txt
index 311c764..211b6e2 100644
--- a/requirements/requirements-python3.5.txt
+++ b/requirements/requirements-python3.5.txt
@@ -13,7 +13,7 @@ Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
 JayDeBeApi==1.2.3
-Jinja2==2.10.3
+Jinja2==2.11.2
 Mako==1.1.3
 Markdown==2.6.11
 MarkupSafe==1.1.1
@@ -60,9 +60,9 @@ bcrypt==3.1.7
 beautifulsoup4==4.7.1
 billiard==3.6.3.0
 blinker==1.4
-boto3==1.14.21
+boto3==1.14.25
 boto==2.49.0
-botocore==1.17.21
+botocore==1.17.25
 cached-property==1.5.1
 cachetools==4.1.1
 cassandra-driver==3.20.2
@@ -81,7 +81,7 @@ colorlog==4.0.2
 configparser==3.5.3
 coverage==5.2
 croniter==0.3.34
-cryptography==2.9.2
+cryptography==3.0
 cx-Oracle==8.0.0
 datadog==0.38.0
 decorator==4.4.2
@@ -104,7 +104,7 @@ filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
 flaky==3.7.0
-flask-swagger==0.2.13
+flask-swagger==0.2.14
 flower==0.9.5
 freezegun==0.3.15
 fsspec==0.7.4
@@ -112,13 +112,13 @@ funcsigs==1.0.2
 future-fstrings==1.2.0
 future==0.18.2
 gcsfs==0.6.2
-google-api-core==1.21.0
+google-api-core==1.22.0
 google-api-python-client==1.10.0
 google-auth-httplib2==0.0.4
 google-auth-oauthlib==0.4.1
-google-auth==1.19.1
-google-cloud-bigquery==1.25.0
-google-cloud-bigtable==1.2.1
+google-auth==1.19.2
+google-cloud-bigquery==1.26.0
+google-cloud-bigtable==1.3.0
 google-cloud-container==1.0.1
 google-cloud-core==1.3.0
 google-cloud-dlp==1.0.0
@@ -137,13 +137,13 @@ graphviz==0.14.1
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
-gunicorn==19.10.0
+gunicorn==20.0.4
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
 humanize==2.5.0
 hvac==0.10.4
-identify==1.4.23
+identify==1.4.25
 idna==2.10
 imagesize==1.2.0
 importlib-metadata==1.7.0
@@ -155,7 +155,7 @@ ipython==7.9.0
 iso8601==0.1.12
 isodate==0.6.0
 itsdangerous==1.1.0
-jedi==0.17.1
+jedi==0.17.2
 jira==2.0.0
 jmespath==0.10.0
 json-merge-patch==0.2
@@ -304,7 +304,7 @@ thrift==0.13.0
 tokenize-rt==3.2.0
 toml==0.10.1
 tornado==5.1.1
-tqdm==4.47.0
+tqdm==4.48.0
 traitlets==4.3.3
 typed-ast==1.4.1
 typing-extensions==3.7.4.2
diff --git a/requirements/requirements-python3.6.txt b/requirements/requirements-python3.6.txt
index cbfa08d..3429b87 100644
--- a/requirements/requirements-python3.6.txt
+++ b/requirements/requirements-python3.6.txt
@@ -13,7 +13,7 @@ Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
 JayDeBeApi==1.2.3
-Jinja2==2.10.3
+Jinja2==2.11.2
 Mako==1.1.3
 Markdown==2.6.11
 MarkupSafe==1.1.1
@@ -62,9 +62,9 @@ beautifulsoup4==4.7.1
 billiard==3.6.3.0
 black==19.10b0
 blinker==1.4
-boto3==1.14.20
+boto3==1.14.25
 boto==2.49.0
-botocore==1.17.20
+botocore==1.17.25
 cached-property==1.5.1
 cachetools==4.1.1
 cassandra-driver==3.20.2
@@ -73,7 +73,7 @@ celery==4.4.6
 certifi==2020.6.20
 cffi==1.14.0
 cfgv==3.1.0
-cfn-lint==0.33.2
+cfn-lint==0.34.0
 cgroupspy==0.1.6
 chardet==3.0.4
 click==6.7
@@ -83,8 +83,9 @@ colorlog==4.0.2
 configparser==3.5.3
 coverage==5.2
 croniter==0.3.34
-cryptography==2.9.2
+cryptography==3.0
 cx-Oracle==8.0.0
+dataclasses==0.7
 datadog==0.38.0
 decorator==4.4.2
 defusedxml==0.6.0
@@ -106,7 +107,7 @@ filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
 flaky==3.7.0
-flask-swagger==0.2.13
+flask-swagger==0.2.14
 flower==0.9.5
 freezegun==0.3.15
 fsspec==0.7.4
@@ -114,14 +115,14 @@ funcsigs==1.0.2
 future-fstrings==1.2.0
 future==0.18.2
 gcsfs==0.6.2
-google-api-core==1.21.0
-google-api-python-client==1.9.3
+google-api-core==1.22.0
+google-api-python-client==1.10.0
 google-auth-httplib2==0.0.4
 google-auth-oauthlib==0.4.1
-google-auth==1.19.0
-google-cloud-bigquery==1.25.0
-google-cloud-bigtable==1.2.1
-google-cloud-container==1.0.1
+google-auth==1.19.2
+google-cloud-bigquery==1.26.0
+google-cloud-bigtable==1.3.0
+google-cloud-container==2.0.0
 google-cloud-core==1.3.0
 google-cloud-dlp==1.0.0
 google-cloud-language==1.3.0
@@ -139,16 +140,17 @@ graphviz==0.14.1
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
-gunicorn==19.10.0
+gunicorn==20.0.4
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
 humanize==2.5.0
 hvac==0.10.4
-identify==1.4.23
+identify==1.4.25
 idna==2.10
 imagesize==1.2.0
 importlib-metadata==1.7.0
+importlib-resources==3.0.0
 inflection==0.5.0
 ipdb==0.13.3
 ipython-genutils==0.2.0
@@ -156,7 +158,7 @@ ipython==7.16.1
 iso8601==0.1.12
 isodate==0.6.0
 itsdangerous==1.1.0
-jedi==0.17.1
+jedi==0.17.2
 jira==2.0.0
 jmespath==0.10.0
 json-merge-patch==0.2
@@ -172,6 +174,7 @@ kombu==4.6.11
 kubernetes==11.0.0
 lazy-object-proxy==1.5.0
 ldap3==2.7
+libcst==0.3.7
 lockfile==0.12.2
 marshmallow-enum==1.5.1
 marshmallow-sqlalchemy==0.23.1
@@ -190,12 +193,12 @@ mysqlclient==1.3.14
 natsort==7.0.1
 nbclient==0.4.1
 nbformat==5.0.7
-nest-asyncio==1.3.3
+nest-asyncio==1.4.0
 networkx==2.4
 nodeenv==1.4.0
 nteract-scrapbook==0.4.1
 ntlm-auth==1.5.0
-numpy==1.19.0
+numpy==1.19.1
 oauthlib==3.1.0
 oscrypto==1.2.0
 packaging==20.4
@@ -217,8 +220,9 @@ presto-python-client==0.7.0
 prison==0.1.3
 prometheus-client==0.8.0
 prompt-toolkit==3.0.5
+proto-plus==1.3.2
 protobuf==3.12.2
-psutil==5.7.0
+psutil==5.7.2
 psycopg2-binary==2.8.5
 ptyprocess==0.6.0
 py==1.9.0
@@ -242,7 +246,7 @@ pytest-cov==2.10.0
 pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
-pytest-timeout==1.4.1
+pytest-timeout==1.4.2
 pytest-xdist==1.33.0
 pytest==5.4.3
 python-daemon==2.2.4
@@ -260,7 +264,7 @@ pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
 redis==3.5.3
-regex==2020.6.8
+regex==2020.7.14
 requests-futures==0.9.4
 requests-kerberos==0.12.0
 requests-mock==1.8.0
@@ -279,7 +283,7 @@ setproctitle==1.1.10
 six==1.15.0
 slackclient==1.3.2
 snowballstemmer==2.0.0
-snowflake-connector-python==2.2.8
+snowflake-connector-python==2.2.9
 snowflake-sqlalchemy==1.2.3
 soupsieve==2.0.1
 sphinx-argparse==0.2.5
@@ -306,22 +310,24 @@ thrift-sasl==0.4.2
 thrift==0.13.0
 toml==0.10.1
 tornado==5.1.1
-tqdm==4.47.0
+tqdm==4.48.0
 traitlets==4.3.3
 typed-ast==1.4.1
 typing-extensions==3.7.4.2
+typing-inspect==0.6.0
+typing==3.7.4.3
 tzlocal==1.5.1
 unicodecsv==0.14.1
 uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.26
+virtualenv==20.0.27
 wcwidth==0.2.5
 websocket-client==0.57.0
 wrapt==1.12.1
 xmltodict==0.12.0
-yamllint==1.23.0
+yamllint==1.24.2
 zdesk==2.7.1
 zipp==3.1.0
 zope.deprecation==4.4.0
diff --git a/requirements/requirements-python3.7.txt b/requirements/requirements-python3.7.txt
index f1c6c48..ff42b59 100644
--- a/requirements/requirements-python3.7.txt
+++ b/requirements/requirements-python3.7.txt
@@ -13,7 +13,7 @@ Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
 JayDeBeApi==1.2.3
-Jinja2==2.10.3
+Jinja2==2.11.2
 Mako==1.1.3
 Markdown==2.6.11
 MarkupSafe==1.1.1
@@ -62,9 +62,9 @@ beautifulsoup4==4.7.1
 billiard==3.6.3.0
 black==19.10b0
 blinker==1.4
-boto3==1.14.20
+boto3==1.14.25
 boto==2.49.0
-botocore==1.17.20
+botocore==1.17.25
 cached-property==1.5.1
 cachetools==4.1.1
 cassandra-driver==3.20.2
@@ -73,7 +73,7 @@ celery==4.4.6
 certifi==2020.6.20
 cffi==1.14.0
 cfgv==3.1.0
-cfn-lint==0.33.2
+cfn-lint==0.34.0
 cgroupspy==0.1.6
 chardet==3.0.4
 click==6.7
@@ -83,7 +83,7 @@ colorlog==4.0.2
 configparser==3.5.3
 coverage==5.2
 croniter==0.3.34
-cryptography==2.9.2
+cryptography==3.0
 cx-Oracle==8.0.0
 datadog==0.38.0
 decorator==4.4.2
@@ -106,7 +106,7 @@ filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
 flaky==3.7.0
-flask-swagger==0.2.13
+flask-swagger==0.2.14
 flower==0.9.5
 freezegun==0.3.15
 fsspec==0.7.4
@@ -114,14 +114,14 @@ funcsigs==1.0.2
 future-fstrings==1.2.0
 future==0.18.2
 gcsfs==0.6.2
-google-api-core==1.21.0
-google-api-python-client==1.9.3
+google-api-core==1.22.0
+google-api-python-client==1.10.0
 google-auth-httplib2==0.0.4
 google-auth-oauthlib==0.4.1
-google-auth==1.19.0
-google-cloud-bigquery==1.25.0
-google-cloud-bigtable==1.2.1
-google-cloud-container==1.0.1
+google-auth==1.19.2
+google-cloud-bigquery==1.26.0
+google-cloud-bigtable==1.3.0
+google-cloud-container==2.0.0
 google-cloud-core==1.3.0
 google-cloud-dlp==1.0.0
 google-cloud-language==1.3.0
@@ -139,13 +139,13 @@ graphviz==0.14.1
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
-gunicorn==19.10.0
+gunicorn==20.0.4
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
 humanize==2.5.0
 hvac==0.10.4
-identify==1.4.23
+identify==1.4.25
 idna==2.10
 imagesize==1.2.0
 importlib-metadata==1.7.0
@@ -156,7 +156,7 @@ ipython==7.16.1
 iso8601==0.1.12
 isodate==0.6.0
 itsdangerous==1.1.0
-jedi==0.17.1
+jedi==0.17.2
 jira==2.0.0
 jmespath==0.10.0
 json-merge-patch==0.2
@@ -172,6 +172,7 @@ kombu==4.6.11
 kubernetes==11.0.0
 lazy-object-proxy==1.5.0
 ldap3==2.7
+libcst==0.3.7
 lockfile==0.12.2
 marshmallow-enum==1.5.1
 marshmallow-sqlalchemy==0.23.1
@@ -190,12 +191,12 @@ mysqlclient==1.3.14
 natsort==7.0.1
 nbclient==0.4.1
 nbformat==5.0.7
-nest-asyncio==1.3.3
+nest-asyncio==1.4.0
 networkx==2.4
 nodeenv==1.4.0
 nteract-scrapbook==0.4.1
 ntlm-auth==1.5.0
-numpy==1.19.0
+numpy==1.19.1
 oauthlib==3.1.0
 oscrypto==1.2.0
 packaging==20.4
@@ -217,8 +218,9 @@ presto-python-client==0.7.0
 prison==0.1.3
 prometheus-client==0.8.0
 prompt-toolkit==3.0.5
+proto-plus==1.3.2
 protobuf==3.12.2
-psutil==5.7.0
+psutil==5.7.2
 psycopg2-binary==2.8.5
 ptyprocess==0.6.0
 py==1.9.0
@@ -234,6 +236,7 @@ pydruid==0.5.8
 pyflakes==2.2.0
 pykerberos==1.2.1
 pymongo==3.10.1
+pymssql==2.1.4
 pyparsing==2.4.7
 pyrsistent==0.16.0
 pysftp==0.2.9
@@ -241,7 +244,7 @@ pytest-cov==2.10.0
 pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
-pytest-timeout==1.4.1
+pytest-timeout==1.4.2
 pytest-xdist==1.33.0
 pytest==5.4.3
 python-daemon==2.2.4
@@ -259,7 +262,7 @@ pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
 redis==3.5.3
-regex==2020.6.8
+regex==2020.7.14
 requests-futures==0.9.4
 requests-kerberos==0.12.0
 requests-mock==1.8.0
@@ -278,7 +281,7 @@ setproctitle==1.1.10
 six==1.15.0
 slackclient==1.3.2
 snowballstemmer==2.0.0
-snowflake-connector-python==2.2.8
+snowflake-connector-python==2.2.9
 snowflake-sqlalchemy==1.2.3
 soupsieve==2.0.1
 sphinx-argparse==0.2.5
@@ -305,22 +308,23 @@ thrift-sasl==0.4.2
 thrift==0.13.0
 toml==0.10.1
 tornado==5.1.1
-tqdm==4.47.0
+tqdm==4.48.0
 traitlets==4.3.3
 typed-ast==1.4.1
 typing-extensions==3.7.4.2
+typing-inspect==0.6.0
 tzlocal==1.5.1
 unicodecsv==0.14.1
 uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.26
+virtualenv==20.0.27
 wcwidth==0.2.5
 websocket-client==0.57.0
 wrapt==1.12.1
 xmltodict==0.12.0
-yamllint==1.23.0
+yamllint==1.24.2
 zdesk==2.7.1
 zipp==3.1.0
 zope.deprecation==4.4.0
diff --git a/requirements/requirements-python3.8.txt b/requirements/requirements-python3.8.txt
index f1c6c48..747ae42 100644
--- a/requirements/requirements-python3.8.txt
+++ b/requirements/requirements-python3.8.txt
@@ -8,7 +8,7 @@ Flask-Caching==1.3.3
 Flask-JWT-Extended==3.24.1
 Flask-Login==0.4.1
 Flask-OpenID==1.2.5
-Flask-SQLAlchemy==2.4.4
+Flask-SQLAlchemy==2.4.3
 Flask-WTF==0.14.3
 Flask==1.1.2
 JPype1==0.7.1
@@ -24,9 +24,9 @@ PySmbClient==0.1.5
 PyYAML==5.3.1
 Pygments==2.6.1
 SQLAlchemy-JSONField==0.9.0
-SQLAlchemy-Utils==0.36.8
+SQLAlchemy-Utils==0.36.6
 SQLAlchemy==1.3.18
-Sphinx==3.1.2
+Sphinx==3.1.1
 Unidecode==1.1.1
 WTForms==2.3.1
 Werkzeug==0.16.1
@@ -39,7 +39,7 @@ ansiwrap==0.8.4
 apipkg==1.5
 apispec==1.3.3
 appdirs==1.4.4
-argcomplete==1.12.0
+argcomplete==1.11.1
 asn1crypto==1.3.0
 astroid==2.4.2
 async-generator==1.10
@@ -48,10 +48,10 @@ attrs==19.3.0
 aws-sam-translator==1.25.0
 aws-xray-sdk==2.6.0
 azure-common==1.1.25
-azure-cosmos==3.2.0
+azure-cosmos==3.1.2
 azure-datalake-store==0.0.48
 azure-mgmt-containerinstance==1.5.0
-azure-mgmt-resource==10.1.0
+azure-mgmt-resource==10.0.0
 azure-nspkg==3.0.2
 azure-storage-blob==2.1.0
 azure-storage-common==2.1.0
@@ -62,9 +62,9 @@ beautifulsoup4==4.7.1
 billiard==3.6.3.0
 black==19.10b0
 blinker==1.4
-boto3==1.14.20
+boto3==1.14.14
 boto==2.49.0
-botocore==1.17.20
+botocore==1.17.14
 cached-property==1.5.1
 cachetools==4.1.1
 cassandra-driver==3.20.2
@@ -81,11 +81,11 @@ cloudant==0.5.10
 colorama==0.4.3
 colorlog==4.0.2
 configparser==3.5.3
-coverage==5.2
+coverage==5.1
 croniter==0.3.34
 cryptography==2.9.2
 cx-Oracle==8.0.0
-datadog==0.38.0
+datadog==0.37.1
 decorator==4.4.2
 defusedxml==0.6.0
 dill==0.3.2
@@ -101,13 +101,13 @@ elasticsearch==5.5.3
 email-validator==1.1.1
 entrypoints==0.3
 execnet==1.7.1
-fastavro==0.23.6
+fastavro==0.23.5
 filelock==3.0.12
 flake8-colors==0.1.6
 flake8==3.8.3
-flaky==3.7.0
+flaky==3.6.1
 flask-swagger==0.2.13
-flower==0.9.5
+flower==0.9.4
 freezegun==0.3.15
 fsspec==0.7.4
 funcsigs==1.0.2
@@ -116,9 +116,9 @@ future==0.18.2
 gcsfs==0.6.2
 google-api-core==1.21.0
 google-api-python-client==1.9.3
-google-auth-httplib2==0.0.4
+google-auth-httplib2==0.0.3
 google-auth-oauthlib==0.4.1
-google-auth==1.19.0
+google-auth==1.18.0
 google-cloud-bigquery==1.25.0
 google-cloud-bigtable==1.2.1
 google-cloud-container==1.0.1
@@ -135,7 +135,7 @@ google-cloud-videointelligence==1.15.0
 google-cloud-vision==1.0.0
 google-resumable-media==0.5.1
 googleapis-common-protos==1.52.0
-graphviz==0.14.1
+graphviz==0.14
 grpc-google-iam-v1==0.12.3
 grpcio-gcp==0.2.2
 grpcio==1.30.0
@@ -143,9 +143,9 @@ gunicorn==19.10.0
 hdfs==2.5.8
 hmsclient==0.1.1
 httplib2==0.18.1
-humanize==2.5.0
+humanize==0.5.1
 hvac==0.10.4
-identify==1.4.23
+identify==1.4.21
 idna==2.10
 imagesize==1.2.0
 importlib-metadata==1.7.0
@@ -166,7 +166,7 @@ jsonpickle==1.4.1
 jsonpointer==2.0
 jsonschema==3.2.0
 junit-xml==1.9
-jupyter-client==6.1.6
+jupyter-client==6.1.5
 jupyter-core==4.6.3
 kombu==4.6.11
 kubernetes==11.0.0
@@ -188,7 +188,7 @@ mypy-extensions==0.4.3
 mypy==0.720
 mysqlclient==1.3.14
 natsort==7.0.1
-nbclient==0.4.1
+nbclient==0.4.0
 nbformat==5.0.7
 nest-asyncio==1.3.3
 networkx==2.4
@@ -212,10 +212,9 @@ pexpect==4.8.0
 pickleshare==0.7.5
 pinotdb==0.1.1
 pluggy==0.13.1
-pre-commit==2.6.0
+pre-commit==2.5.1
 presto-python-client==0.7.0
 prison==0.1.3
-prometheus-client==0.8.0
 prompt-toolkit==3.0.5
 protobuf==3.12.2
 psutil==5.7.0
@@ -242,7 +241,7 @@ pytest-forked==1.2.0
 pytest-instafail==0.4.2
 pytest-rerunfailures==9.0
 pytest-timeout==1.4.1
-pytest-xdist==1.33.0
+pytest-xdist==1.32.0
 pytest==5.4.3
 python-daemon==2.2.4
 python-dateutil==2.8.1
@@ -254,7 +253,7 @@ python-nvd3==0.15.0
 python-slugify==4.0.1
 python3-openid==3.2.0
 pytz==2020.1
-pytzdata==2020.1
+pytzdata==2019.3
 pywinrm==0.4.1
 pyzmq==19.0.1
 qds-sdk==1.16.0
@@ -273,7 +272,7 @@ s3transfer==0.3.3
 sasl==0.2.1
 sendgrid==5.6.0
 sentinels==1.0.0
-sentry-sdk==0.16.1
+sentry-sdk==0.15.1
 setproctitle==1.1.10
 six==1.15.0
 slackclient==1.3.2
@@ -315,7 +314,7 @@ uritemplate==3.0.1
 urllib3==1.25.9
 vertica-python==0.10.4
 vine==1.3.0
-virtualenv==20.0.26
+virtualenv==20.0.25
 wcwidth==0.2.5
 websocket-client==0.57.0
 wrapt==1.12.1
diff --git a/requirements/setup-3.5.md5 b/requirements/setup-3.5.md5
index 7302c51..d24fa17 100644
--- a/requirements/setup-3.5.md5
+++ b/requirements/setup-3.5.md5
@@ -1 +1 @@
-da591fb5f6ed08129068e227610706cb  /opt/airflow/setup.py
+52a5d9b968ee82e35b5b49ed02361377  /opt/airflow/setup.py
diff --git a/requirements/setup-3.6.md5 b/requirements/setup-3.6.md5
index 7302c51..d24fa17 100644
--- a/requirements/setup-3.6.md5
+++ b/requirements/setup-3.6.md5
@@ -1 +1 @@
-da591fb5f6ed08129068e227610706cb  /opt/airflow/setup.py
+52a5d9b968ee82e35b5b49ed02361377  /opt/airflow/setup.py
diff --git a/requirements/setup-3.7.md5 b/requirements/setup-3.7.md5
index 7302c51..d24fa17 100644
--- a/requirements/setup-3.7.md5
+++ b/requirements/setup-3.7.md5
@@ -1 +1 @@
-da591fb5f6ed08129068e227610706cb  /opt/airflow/setup.py
+52a5d9b968ee82e35b5b49ed02361377  /opt/airflow/setup.py
diff --git a/scripts/ci/ci_check_license.sh b/scripts/ci/ci_check_license.sh
index e1b54dd..da5aebd 100755
--- a/scripts/ci/ci_check_license.sh
+++ b/scripts/ci/ci_check_license.sh
@@ -16,7 +16,7 @@
 # specific language governing permissions and limitations
 # under the License.
 export MOUNT_SOURCE_DIR_FOR_STATIC_CHECKS="true"
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_fix_ownership.sh b/scripts/ci/ci_fix_ownership.sh
index d230281..7e85152 100755
--- a/scripts/ci/ci_fix_ownership.sh
+++ b/scripts/ci/ci_fix_ownership.sh
@@ -19,7 +19,7 @@
 #
 # Fixes ownership for files created inside container (files owned by root will be owned by host user)
 #
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_flake8.sh b/scripts/ci/ci_flake8.sh
index 15a7ccb..33504c0 100755
--- a/scripts/ci/ci_flake8.sh
+++ b/scripts/ci/ci_flake8.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_generate_requirements.sh b/scripts/ci/ci_generate_requirements.sh
index 689695c..f55799f 100755
--- a/scripts/ci/ci_generate_requirements.sh
+++ b/scripts/ci/ci_generate_requirements.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_push_ci_image.sh b/scripts/ci/ci_push_ci_image.sh
index e114e24..09e2a7a 100755
--- a/scripts/ci/ci_push_ci_image.sh
+++ b/scripts/ci/ci_push_ci_image.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_push_production_images.sh b/scripts/ci/ci_push_production_images.sh
index 37916fe..d6c6e7d 100755
--- a/scripts/ci/ci_push_production_images.sh
+++ b/scripts/ci/ci_push_production_images.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/scripts/ci/ci_run_static_checks.sh b/scripts/ci/ci_run_static_checks.sh
index 234ae61..bfdae1a 100755
--- a/scripts/ci/ci_run_static_checks.sh
+++ b/scripts/ci/ci_run_static_checks.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.5}
+export PYTHON_MAJOR_MINOR_VERSION=3.6
 
 # shellcheck source=scripts/ci/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
diff --git a/setup.py b/setup.py
index 906c705..1fe821b 100644
--- a/setup.py
+++ b/setup.py
@@ -563,14 +563,14 @@ INSTALL_REQUIREMENTS = [
     'flask-appbuilder~=2.2;python_version>="3.6"',
     'flask-caching>=1.3.3, <1.4.0',
     'flask-login>=0.3, <0.5',
-    'flask-swagger==0.2.13',
+    'flask-swagger>=0.2.13, <0.3',
     'flask-wtf>=0.14.2, <0.15',
     'funcsigs>=1.0.0, <2.0.0',
     'future>=0.16.0, <0.19',
     'graphviz>=0.12',
-    'gunicorn>=19.5.0, <20.0',
+    'gunicorn>=19.5.0, <21.0',
     'iso8601>=0.1.12',
-    'jinja2>=2.10.1, <2.11.0',
+    'jinja2>=2.10.1, <2.12.0',
     'json-merge-patch==0.2',
     'jsonschema~=3.0',
     'lazy_object_proxy~=1.3',


[airflow] 02/32: Python base image version is retrieved in the right place (#9931)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit e2e68538962a8b894314d4522056f8f39840a474
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Wed Jul 22 15:58:39 2020 +0200

    Python base image version is retrieved in the right place (#9931)
    
    When quick-fixing the Python 3.8.4 error (#9820), the
    PYTHON_BASE_IMAGE_VERSION variable was added, but it was initialized
    too early in Breeze and picked up the default Python version rather
    than the one chosen by the --python switch. As a result, the
    requirements generated locally by Breeze contained the wrong set of
    packages, and images built locally for different Python versions were
    based on the default Python version, not the one chosen by the
    --python switch.
    
    (cherry picked from commit 7b9e8e0950f4b963b5cb13ed069d6132080fcd27)
---
 breeze                                  | 1 +
 scripts/ci/libraries/_build_images.sh   | 7 +++++++
 scripts/ci/libraries/_initialization.sh | 4 ----
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/breeze b/breeze
index 534bec3..a10ba9c 100755
--- a/breeze
+++ b/breeze
@@ -508,6 +508,7 @@ function prepare_command_files() {
     export COMPOSE_CI_FILE
     export COMPOSE_PROD_FILE
 
+    get_base_image_version
     # Base python image for the build
     export PYTHON_BASE_IMAGE=python:${PYTHON_BASE_IMAGE_VERSION}-slim-buster
     export AIRFLOW_CI_IMAGE="${DOCKERHUB_USER}/${DOCKERHUB_REPO}:${BRANCH_NAME}-python${PYTHON_MAJOR_MINOR_VERSION}-ci"
diff --git a/scripts/ci/libraries/_build_images.sh b/scripts/ci/libraries/_build_images.sh
index ed894a1..352975b 100644
--- a/scripts/ci/libraries/_build_images.sh
+++ b/scripts/ci/libraries/_build_images.sh
@@ -311,11 +311,17 @@ function print_build_info() {
     print_info
 }
 
+function get_base_image_version() {
+    # python image version to use
+    PYTHON_BASE_IMAGE_VERSION=${PYTHON_BASE_IMAGE_VERSION:=${PYTHON_MAJOR_MINOR_VERSION}}
+}
+
 
 
 # Prepares all variables needed by the CI build. Depending on the configuration used (python version
 # DockerHub user etc. the variables are set so that other functions can use those variables.
 function prepare_ci_build() {
+    get_base_image_version
     # We use pulled docker image cache by default for CI images to  speed up the builds
     export DOCKER_CACHE=${DOCKER_CACHE:="pulled"}
     echo
@@ -591,6 +597,7 @@ Docker building ${AIRFLOW_CI_IMAGE}.
 # Prepares all variables needed by the CI build. Depending on the configuration used (python version
 # DockerHub user etc. the variables are set so that other functions can use those variables.
 function prepare_prod_build() {
+    get_base_image_version
     # We use local docker image cache by default for Production images
     export DOCKER_CACHE=${DOCKER_CACHE:="local"}
     echo
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index c41dff9..5f2a742 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -21,10 +21,6 @@ function initialize_common_environment {
     # default python Major/Minor version
     PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:="3.6"}
 
-    # python image version to use
-    # shellcheck disable=SC2034
-    PYTHON_BASE_IMAGE_VERSION=${PYTHON_MAJOR_MINOR_VERSION}
-
     # extra flags passed to docker run for CI image
     # shellcheck disable=SC2034
     EXTRA_DOCKER_FLAGS=()


[airflow] 24/32: Pin Pyarrow < 1.0

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 70a741601646dd68b5e951f7a46cc591fda52020
Author: Kaxil Naik <ka...@gmail.com>
AuthorDate: Sun Aug 2 12:19:42 2020 +0100

    Pin Pyarrow < 1.0
---
 setup.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/setup.py b/setup.py
index cc0f721..327e157 100644
--- a/setup.py
+++ b/setup.py
@@ -325,6 +325,7 @@ pagerduty = [
 papermill = [
     'papermill[all]>=1.0.0',
     'nteract-scrapbook[all]>=0.2.1',
+    'pyarrow<1.0.0'
 ]
 password = [
     'bcrypt>=2.0.0',


[airflow] 26/32: Allow to define custom XCom class (#8560)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 64c89db14308c567ac424201ed38cc452ddc6afd
Author: Tomek Urbaszek <tu...@gmail.com>
AuthorDate: Tue Apr 28 16:55:05 2020 +0200

    Allow to define custom XCom class (#8560)
    
    * Allow to define custom XCom class
    
    closes: #8059
    (cherry picked from commit 6c6d6611d2aa112a947a9ebc7200446f51d0ac4c)
---
 airflow/config_templates/config.yml          |  7 ++++
 airflow/config_templates/default_airflow.cfg |  4 +++
 airflow/models/xcom.py                       | 34 ++++++++++++++++++-
 docs/concepts.rst                            |  9 +++++
 tests/models/test_xcom.py                    | 50 ++++++++++++++++++++++++++++
 5 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/airflow/config_templates/config.yml b/airflow/config_templates/config.yml
index d1c2c90..f54255e 100644
--- a/airflow/config_templates/config.yml
+++ b/airflow/config_templates/config.yml
@@ -476,6 +476,13 @@
       type: string
       example: ~
       default: "True"
+    - name: xcom_backend
+      description: |
+        Path to a custom XCom class that will be used to store and resolve operator results
+      version_added: 1.10.12
+      type: string
+      example: "path.to.CustomXCom"
+      default: "airflow.models.xcom.BaseXCom"
 
 - name: secrets
   description: ~
diff --git a/airflow/config_templates/default_airflow.cfg b/airflow/config_templates/default_airflow.cfg
index bf83b34..e18e538 100644
--- a/airflow/config_templates/default_airflow.cfg
+++ b/airflow/config_templates/default_airflow.cfg
@@ -252,6 +252,10 @@ max_num_rendered_ti_fields_per_task = 30
 # On each dagrun check against defined SLAs
 check_slas = True
 
+# Path to a custom XCom class that will be used to store and resolve operator results
+# Example: xcom_backend = path.to.CustomXCom
+xcom_backend = airflow.models.xcom.BaseXCom
+
 [secrets]
 # Full class name of secrets backend to enable (will precede env vars and metastore in search path)
 # Example: backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
diff --git a/airflow/models/xcom.py b/airflow/models/xcom.py
index f4522b5..0b6a81d 100644
--- a/airflow/models/xcom.py
+++ b/airflow/models/xcom.py
@@ -40,7 +40,7 @@ MAX_XCOM_SIZE = 49344
 XCOM_RETURN_KEY = 'return_value'
 
 
-class XCom(Base, LoggingMixin):
+class BaseXCom(Base, LoggingMixin):
     """
     Base class for XCom objects.
     """
@@ -232,3 +232,35 @@ class XCom(Base, LoggingMixin):
                       "for XCOM, then you need to enable pickle "
                       "support for XCOM in your airflow config.")
             raise
+
+    @staticmethod
+    def deserialize_value(result) -> Any:
+        # TODO: "pickling" has been deprecated and JSON is preferred.
+        # "pickling" will be removed in Airflow 2.0.
+        enable_pickling = conf.getboolean('core', 'enable_xcom_pickling')
+        if enable_pickling:
+            return pickle.loads(result.value)
+
+        try:
+            return json.loads(result.value.decode('UTF-8'))
+        except ValueError:
+            log.error("Could not deserialize the XCOM value from JSON. "
+                      "If you are using pickles instead of JSON "
+                      "for XCOM, then you need to enable pickle "
+                      "support for XCOM in your airflow config.")
+            raise
+
+
+def resolve_xcom_backend():
+    """Resolves custom XCom class"""
+    clazz = conf.getimport("core", "xcom_backend", fallback=f"airflow.models.xcom.{BaseXCom.__name__}")
+    if clazz:
+        if not issubclass(clazz, BaseXCom):
+            raise TypeError(
+                f"Your custom XCom class `{clazz.__name__}` is not a subclass of `{BaseXCom.__name__}`."
+            )
+        return clazz
+    return BaseXCom
+
+
+XCom = resolve_xcom_backend()
diff --git a/docs/concepts.rst b/docs/concepts.rst
index e85c5b3..dd48003 100644
--- a/docs/concepts.rst
+++ b/docs/concepts.rst
@@ -660,6 +660,15 @@ of what this may look like:
 Note that XComs are similar to `Variables`_, but are specifically designed
 for inter-task communication rather than global settings.
 
+Custom XCom backend
+'''''''''''''''''''
+
+It is possible to change the ``XCom`` behaviour for serialization and deserialization of task results.
+To do this, set the ``xcom_backend`` parameter in the Airflow config. The provided value should point
+to a class that is a subclass of :class:`~airflow.models.xcom.BaseXCom`. To alter the serialization /
+deserialization mechanism, the custom class should override the ``serialize_value`` and ``deserialize_value``
+methods.
+
 .. _concepts:variables:
 
 Variables
diff --git a/tests/models/test_xcom.py b/tests/models/test_xcom.py
new file mode 100644
index 0000000..206b074
--- /dev/null
+++ b/tests/models/test_xcom.py
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from airflow.configuration import conf
+from airflow.models.xcom import BaseXCom, resolve_xcom_backend
+from tests.test_utils.config import conf_vars
+
+
+class CustomXCom(BaseXCom):
+    @staticmethod
+    def serialize_value(_):
+        return "custom_value"
+
+
+class TestXCom:
+    @conf_vars({("core", "xcom_backend"): "tests.models.test_xcom.CustomXCom"})
+    def test_resolve_xcom_class(self):
+        cls = resolve_xcom_backend()
+        assert issubclass(cls, CustomXCom)
+        assert cls().serialize_value(None) == "custom_value"
+
+    @conf_vars(
+        {("core", "xcom_backend"): "", ("core", "enable_xcom_pickling"): "False"}
+    )
+    def test_resolve_xcom_class_fallback_to_basexcom(self):
+        cls = resolve_xcom_backend()
+        assert issubclass(cls, BaseXCom)
+        assert cls().serialize_value([1]) == b"[1]"
+
+    @conf_vars({("core", "enable_xcom_pickling"): "False"})
+    def test_resolve_xcom_class_fallback_to_basexcom_no_config(self):
+        init = conf.get("core", "xcom_backend")
+        conf.remove_option("core", "xcom_backend")
+        cls = resolve_xcom_backend()
+        assert issubclass(cls, BaseXCom)
+        assert cls().serialize_value([1]) == b"[1]"
+        conf.set("core", "xcom_backend", init)
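
For readers wiring this up, the following is a minimal sketch of a custom XCom backend built on the BaseXCom / resolve_xcom_backend API introduced above. The JsonXCom class name and the my_company.xcom_backend module path are hypothetical examples, not part of the patch.

import json

from airflow.models.xcom import BaseXCom


class JsonXCom(BaseXCom):
    """Stores XCom values as UTF-8 encoded JSON (mirrors the default behaviour)."""

    @staticmethod
    def serialize_value(value):
        # Called before the value is written to the metadata database.
        return json.dumps(value).encode('utf-8')

    @staticmethod
    def deserialize_value(result):
        # `result` is the XCom row read back from the database; the raw
        # payload is available as bytes in `result.value`.
        return json.loads(result.value.decode('utf-8'))

With `xcom_backend = my_company.xcom_backend.JsonXCom` set in the `[core]` section, `from airflow.models.xcom import XCom` resolves to this class through resolve_xcom_backend(); anything that is not a BaseXCom subclass is rejected with a TypeError, as shown in the diff.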


[airflow] 28/32: Add getimport for xcom change

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 1a8ba6a95696a19eb422a3d0a9553b3a59827052
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Mon Aug 3 15:59:40 2020 -0700

    Add getimport for xcom change
---
 airflow/configuration.py | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/airflow/configuration.py b/airflow/configuration.py
index d912898..01ee90f 100644
--- a/airflow/configuration.py
+++ b/airflow/configuration.py
@@ -42,6 +42,7 @@ import yaml
 from zope.deprecation import deprecated
 
 from airflow.exceptions import AirflowConfigException
+from airflow.utils.module_loading import import_string
 
 standard_library.install_aliases()
 
@@ -342,6 +343,26 @@ class AirflowConfigParser(ConfigParser):
                 "section/key [{section}/{key}] not found "
                 "in config".format(section=section, key=key))
 
+    def getimport(self, section, key, **kwargs):
+        """
+        Reads the option, imports the fully qualified name, and returns the object.
+        In case of failure, it throws an exception with a clear message naming the key and the section.
+        :return: The object, or None if the option is empty
+        """
+        full_qualified_path = conf.get(section=section, key=key, **kwargs)
+        if not full_qualified_path:
+            return None
+
+        try:
+            return import_string(full_qualified_path)
+        except ImportError as e:
+            log.error(e)
+            raise AirflowConfigException(
+                'The object could not be loaded. Please check "{key}" key in "{section}" section. '
+                'Current value: "{full_qualified_path}".' .format(
+                    key=key, section=section, full_qualified_path=full_qualified_path)
+            )
+
     def getboolean(self, section, key, **kwargs):
         val = str(self.get(section, key, **kwargs)).lower().strip()
         if '#' in val:
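
As a usage illustration, the sketch below shows how the new `getimport` helper is expected to be called, mirroring resolve_xcom_backend() from the XCom commit above; the fallback value shown is the one used there.

from airflow.configuration import conf

# Returns the imported object (here a class), or None when the option is set
# to an empty string; an unimportable dotted path raises AirflowConfigException.
xcom_class = conf.getimport(
    "core", "xcom_backend", fallback="airflow.models.xcom.BaseXCom"
)
if xcom_class is None:
    from airflow.models.xcom import BaseXCom as xcom_class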


[airflow] 21/32: Add pre 1.10.11 Kubernetes Paths back with Deprecation Warning (#10067)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit bfa089d4adff5a0892c7e2ef2c9639938e86cdb1
Author: Kaxil Naik <ka...@gmail.com>
AuthorDate: Fri Jul 31 09:34:18 2020 +0100

    Add pre 1.10.11 Kubernetes Paths back with Deprecation Warning (#10067)
---
 airflow/contrib/kubernetes/__init__.py                  |  2 --
 .../contrib/kubernetes/{__init__.py => kube_client.py}  | 14 ++++++++++----
 .../kubernetes}/pod.py                                  | 17 +++++++++++++++--
 .../kubernetes/{__init__.py => pod_runtime_info_env.py} | 14 ++++++++++----
 .../kubernetes/{__init__.py => refresh_config.py}       | 16 ++++++++++++----
 .../__init__.py => contrib/kubernetes/secret.py}        | 11 +++++++++++
 airflow/contrib/kubernetes/{__init__.py => volume.py}   | 14 ++++++++++----
 .../contrib/kubernetes/{__init__.py => volume_mount.py} | 14 ++++++++++----
 airflow/kubernetes/pod_launcher.py                      |  5 +++++
 airflow/kubernetes/pod_launcher_helper.py               |  2 +-
 airflow/kubernetes/pod_runtime_info_env.py              |  2 +-
 airflow/kubernetes/secret.py                            |  4 +++-
 airflow/kubernetes/volume_mount.py                      |  3 +--
 tests/kubernetes/models/test_pod.py                     |  2 +-
 tests/kubernetes/test_pod_launcher_helper.py            |  5 +++--
 15 files changed, 93 insertions(+), 32 deletions(-)

diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/__init__.py
index ef7074c..b7f8352 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/__init__.py
@@ -17,5 +17,3 @@
 # specific language governing permissions and limitations
 # under the License.
 #
-
-from airflow.kubernetes import *  # noqa
diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/kube_client.py
similarity index 71%
copy from airflow/contrib/kubernetes/__init__.py
copy to airflow/contrib/kubernetes/kube_client.py
index ef7074c..d785fac 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/kube_client.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-#
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,6 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-#
+"""This module is deprecated. Please use `airflow.kubernetes.kube_client`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.kube_client import *   # noqa
 
-from airflow.kubernetes import *  # noqa
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.kube_client`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/kubernetes_deprecated/pod.py b/airflow/contrib/kubernetes/pod.py
similarity index 92%
rename from airflow/kubernetes_deprecated/pod.py
rename to airflow/contrib/kubernetes/pod.py
index 22a8c12..0ab3616 100644
--- a/airflow/kubernetes_deprecated/pod.py
+++ b/airflow/contrib/kubernetes/pod.py
@@ -14,9 +14,17 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
+"""This module is deprecated. Please use `airflow.kubernetes.pod`."""
 
-import kubernetes.client.models as k8s
-from airflow.kubernetes.pod import Resources
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.pod import Port, Resources   # noqa
+
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.pod`.",
+    DeprecationWarning, stacklevel=2
+)
 
 
 class Pod(object):
@@ -86,6 +94,10 @@ class Pod(object):
             pod_runtime_info_envs=None,
             dnspolicy=None
     ):
+        warnings.warn(
+            "Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. Please use `k8s.V1Pod`.",
+            DeprecationWarning, stacklevel=2
+        )
         self.image = image
         self.envs = envs or {}
         self.cmds = cmds
@@ -119,6 +131,7 @@ class Pod(object):
 
         :return: k8s.V1Pod
         """
+        import kubernetes.client.models as k8s
         meta = k8s.V1ObjectMeta(
             labels=self.labels,
             name=self.name,
diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/pod_runtime_info_env.py
similarity index 68%
copy from airflow/contrib/kubernetes/__init__.py
copy to airflow/contrib/kubernetes/pod_runtime_info_env.py
index ef7074c..0dc8aed 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/pod_runtime_info_env.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-#
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,6 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-#
+"""This module is deprecated. Please use `airflow.kubernetes.pod_runtime_info_env`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.pod_runtime_info_env import PodRuntimeInfoEnv    # noqa
 
-from airflow.kubernetes import *  # noqa
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.pod_runtime_info_env`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/refresh_config.py
similarity index 66%
copy from airflow/contrib/kubernetes/__init__.py
copy to airflow/contrib/kubernetes/refresh_config.py
index ef7074c..f88069e 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/refresh_config.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-#
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,6 +14,16 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-#
+"""This module is deprecated. Please use `airflow.kubernetes.refresh_config`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.refresh_config import (   # noqa
+    RefreshConfiguration, RefreshKubeConfigLoader, load_kube_config
+)
 
-from airflow.kubernetes import *  # noqa
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.refresh_config`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/kubernetes_deprecated/__init__.py b/airflow/contrib/kubernetes/secret.py
similarity index 71%
rename from airflow/kubernetes_deprecated/__init__.py
rename to airflow/contrib/kubernetes/secret.py
index 13a8339..ad41d4d 100644
--- a/airflow/kubernetes_deprecated/__init__.py
+++ b/airflow/contrib/kubernetes/secret.py
@@ -14,3 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
+"""This module is deprecated. Please use `airflow.kubernetes.secret`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.secret import Secret   # noqa
+
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.secret`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/volume.py
similarity index 72%
copy from airflow/contrib/kubernetes/__init__.py
copy to airflow/contrib/kubernetes/volume.py
index ef7074c..c72e208 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/volume.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-#
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,6 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-#
+"""This module is deprecated. Please use `airflow.kubernetes.volume`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.volume import Volume   # noqa
 
-from airflow.kubernetes import *  # noqa
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.volume`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/contrib/kubernetes/__init__.py b/airflow/contrib/kubernetes/volume_mount.py
similarity index 70%
copy from airflow/contrib/kubernetes/__init__.py
copy to airflow/contrib/kubernetes/volume_mount.py
index ef7074c..a474e3b 100644
--- a/airflow/contrib/kubernetes/__init__.py
+++ b/airflow/contrib/kubernetes/volume_mount.py
@@ -1,5 +1,3 @@
-# -*- coding: utf-8 -*-
-#
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -16,6 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-#
+"""This module is deprecated. Please use `airflow.kubernetes.volume_mount`."""
+
+import warnings
+
+# pylint: disable=unused-import
+from airflow.kubernetes.volume_mount import VolumeMount   # noqa
 
-from airflow.kubernetes import *  # noqa
+warnings.warn(
+    "This module is deprecated. Please use `airflow.kubernetes.volume_mount`.",
+    DeprecationWarning, stacklevel=2
+)
diff --git a/airflow/kubernetes/pod_launcher.py b/airflow/kubernetes/pod_launcher.py
index 05df204..d6507df 100644
--- a/airflow/kubernetes/pod_launcher.py
+++ b/airflow/kubernetes/pod_launcher.py
@@ -17,6 +17,7 @@
 """Launches PODs"""
 import json
 import time
+import warnings
 from datetime import datetime as dt
 
 import tenacity
@@ -93,6 +94,10 @@ class PodLauncher(LoggingMixin):
             # attempts to run pod_mutation_hook using k8s.V1Pod, if this
             # fails we attempt to run by converting pod to Old Pod
         except AttributeError:
+            warnings.warn(
+                "Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. "
+                "Please use `k8s.V1Pod` instead.", DeprecationWarning, stacklevel=2
+            )
             dummy_pod = convert_to_airflow_pod(pod)
             settings.pod_mutation_hook(dummy_pod)
             dummy_pod = dummy_pod.to_v1_kubernetes_pod()
diff --git a/airflow/kubernetes/pod_launcher_helper.py b/airflow/kubernetes/pod_launcher_helper.py
index d8b2698..8c9fc6e 100644
--- a/airflow/kubernetes/pod_launcher_helper.py
+++ b/airflow/kubernetes/pod_launcher_helper.py
@@ -21,7 +21,7 @@ import kubernetes.client.models as k8s  # noqa
 from airflow.kubernetes.volume import Volume
 from airflow.kubernetes.volume_mount import VolumeMount
 from airflow.kubernetes.pod import Port
-from airflow.kubernetes_deprecated.pod import Pod
+from airflow.contrib.kubernetes.pod import Pod
 
 
 def convert_to_airflow_pod(pod):
diff --git a/airflow/kubernetes/pod_runtime_info_env.py b/airflow/kubernetes/pod_runtime_info_env.py
index 7d23a7e..72e2151 100644
--- a/airflow/kubernetes/pod_runtime_info_env.py
+++ b/airflow/kubernetes/pod_runtime_info_env.py
@@ -19,7 +19,6 @@ Classes for interacting with Kubernetes API
 """
 
 import copy
-import kubernetes.client.models as k8s
 from airflow.kubernetes.k8s_model import K8SModel
 
 
@@ -43,6 +42,7 @@ class PodRuntimeInfoEnv(K8SModel):
         """
         :return: kubernetes.client.models.V1EnvVar
         """
+        import kubernetes.client.models as k8s
         return k8s.V1EnvVar(
             name=self.name,
             value_from=k8s.V1EnvVarSource(
diff --git a/airflow/kubernetes/secret.py b/airflow/kubernetes/secret.py
index 8591a88..9ff1927 100644
--- a/airflow/kubernetes/secret.py
+++ b/airflow/kubernetes/secret.py
@@ -20,7 +20,6 @@ Classes for interacting with Kubernetes API
 
 import uuid
 import copy
-import kubernetes.client.models as k8s
 from airflow.exceptions import AirflowConfigException
 from airflow.kubernetes.k8s_model import K8SModel
 
@@ -65,6 +64,7 @@ class Secret(K8SModel):
         self.key = key
 
     def to_env_secret(self):
+        import kubernetes.client.models as k8s
         return k8s.V1EnvVar(
             name=self.deploy_target,
             value_from=k8s.V1EnvVarSource(
@@ -76,11 +76,13 @@ class Secret(K8SModel):
         )
 
     def to_env_from_secret(self):
+        import kubernetes.client.models as k8s
         return k8s.V1EnvFromSource(
             secret_ref=k8s.V1SecretEnvSource(name=self.secret)
         )
 
     def to_volume_secret(self):
+        import kubernetes.client.models as k8s
         vol_id = 'secretvol{}'.format(uuid.uuid4())
         return (
             k8s.V1Volume(
diff --git a/airflow/kubernetes/volume_mount.py b/airflow/kubernetes/volume_mount.py
index ab87ba9..ab9c34a 100644
--- a/airflow/kubernetes/volume_mount.py
+++ b/airflow/kubernetes/volume_mount.py
@@ -19,7 +19,6 @@ Classes for interacting with Kubernetes API
 """
 
 import copy
-import kubernetes.client.models as k8s
 from airflow.kubernetes.k8s_model import K8SModel
 
 
@@ -49,8 +48,8 @@ class VolumeMount(K8SModel):
         Converts to k8s object.
 
         :return Volume Mount k8s object
-
         """
+        import kubernetes.client.models as k8s
         return k8s.V1VolumeMount(
             name=self.name,
             mount_path=self.mount_path,
diff --git a/tests/kubernetes/models/test_pod.py b/tests/kubernetes/models/test_pod.py
index 096b5f0..2e53d60 100644
--- a/tests/kubernetes/models/test_pod.py
+++ b/tests/kubernetes/models/test_pod.py
@@ -76,7 +76,7 @@ class TestPod(unittest.TestCase):
         }, result)
 
     def test_to_v1_pod(self):
-        from airflow.kubernetes_deprecated.pod import Pod as DeprecatedPod
+        from airflow.contrib.kubernetes.pod import Pod as DeprecatedPod
         from airflow.kubernetes.volume import Volume
         from airflow.kubernetes.volume_mount import VolumeMount
         from airflow.kubernetes.pod import Resources
diff --git a/tests/kubernetes/test_pod_launcher_helper.py b/tests/kubernetes/test_pod_launcher_helper.py
index a308ac3..761d138 100644
--- a/tests/kubernetes/test_pod_launcher_helper.py
+++ b/tests/kubernetes/test_pod_launcher_helper.py
@@ -20,7 +20,7 @@ from airflow.kubernetes.pod import Port
 from airflow.kubernetes.volume_mount import VolumeMount
 from airflow.kubernetes.volume import Volume
 from airflow.kubernetes.pod_launcher_helper import convert_to_airflow_pod
-from airflow.kubernetes_deprecated.pod import Pod
+from airflow.contrib.kubernetes.pod import Pod
 import kubernetes.client.models as k8s
 
 
@@ -84,7 +84,8 @@ class TestPodLauncherHelper(unittest.TestCase):
 
         self.assertDictEqual(expected_dict, result_dict)
 
-    def pull_out_volumes(self, result_dict):
+    @staticmethod
+    def pull_out_volumes(result_dict):
         parsed_configs = []
         for volume in result_dict['volumes']:
             vol = {'name': volume['name']}
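
To show what the re-added paths mean for user code, here is a small hedged sketch (assuming a fresh interpreter where the shim module has not been imported yet): the old import keeps working but now emits a DeprecationWarning pointing at the supported location.

import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Old (pre-1.10.11) path, re-added as a thin shim by this commit.
    from airflow.contrib.kubernetes.secret import Secret  # noqa: F401

# The shim emits a DeprecationWarning on import.
assert any(issubclass(w.category, DeprecationWarning) for w in caught)

# New code should use the supported module instead.
from airflow.kubernetes.secret import Secret  # noqa: F401,F811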


[airflow] 29/32: Pin fsspec<8.0.0 for Python <3.6 to fix Static Checks

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 3e1d88eb2aeaca5103cd7dce536750bd48a6c37f
Author: Kaxil Naik <ka...@gmail.com>
AuthorDate: Thu Aug 6 20:19:35 2020 +0100

    Pin fsspec<8.0.0 for Python <3.6 to fix Static Checks
---
 setup.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/setup.py b/setup.py
index 327e157..e16b1cd 100644
--- a/setup.py
+++ b/setup.py
@@ -325,7 +325,8 @@ pagerduty = [
 papermill = [
     'papermill[all]>=1.0.0',
     'nteract-scrapbook[all]>=0.2.1',
-    'pyarrow<1.0.0'
+    'pyarrow<1.0.0',
+    'fsspec<0.8.0;python_version=="3.5"'
 ]
 password = [
     'bcrypt>=2.0.0',
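
The pin relies on a PEP 508 environment marker (the part after the semicolon), so the cap only applies on Python 3.5 interpreters. A small illustration of how such a marker is evaluated, assuming the `packaging` library is available:

from packaging.markers import Marker

# The marker from 'fsspec<0.8.0;python_version=="3.5"'; pip applies the
# version cap only when this evaluates to True for the running interpreter.
marker = Marker('python_version == "3.5"')
print(marker.evaluate())  # False on Python 3.6+, True on 3.5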


[airflow] 23/32: Set pytest version to be < 6.0.0 due to breaking changes (#10043)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 6f8b0ccd88a8f69b7f6efbc68adee15791daae72
Author: Felix Uellendall <fe...@users.noreply.github.com>
AuthorDate: Wed Jul 29 10:45:34 2020 +0200

    Set pytest version to be < 6.0.0 due to breaking changes (#10043)
    
    The latest pytest version 6.0.0 released yesterday (2020-07-28)
    does not work in conjunction with the version of pylint (2.4.3) we
    are using.
    
    (cherry picked from commit 2e0d91d8eb9bcae1886358791917b953330a957f)
---
 setup.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/setup.py b/setup.py
index 35323d2..cc0f721 100644
--- a/setup.py
+++ b/setup.py
@@ -422,7 +422,7 @@ devel = [
     'paramiko',
     'pre-commit',
     'pysftp',
-    'pytest',
+    'pytest<6.0.0',  # FIXME: pylint complaining for pytest.mark.* on v6.0
     'pytest-cov',
     'pytest-instafail',
     'pytest-rerunfailures',


[airflow] 15/32: Pin github checkout action to v2 (#9938)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit c72ce92d54aa9011187de62b0caa7b2f970034f0
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Wed Jul 22 21:18:17 2020 +0200

    Pin github checkout action to v2 (#9938)
    
    (cherry picked from commit e86d753b4ba83dd3f27c613d48493c088abaa2b8)
---
 .github/workflows/ci.yml | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 8cb1efa..d0e7ab2 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -45,7 +45,7 @@ jobs:
     name: "Cancel previous workflow run"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: Get ci workflow id
         run: "scripts/ci/cancel/get_workflow_id.sh"
         env:
@@ -67,7 +67,7 @@ jobs:
     env:
       MOUNT_SOURCE_DIR_FOR_STATIC_CHECKS: "true"
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.x'
@@ -94,7 +94,7 @@ jobs:
     needs:
       - cancel-previous-workflow-run
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.6'
@@ -115,7 +115,7 @@ jobs:
     outputs:
       run-tests: ${{ steps.trigger-tests.outputs.run-tests }}
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: "Check if tests should be run"
         run: "./scripts/ci/tools/ci_check_if_tests_should_be_run.sh"
         id: trigger-tests
@@ -151,7 +151,7 @@ jobs:
       HELM_VERSION: "${{ matrix.helm-version }}"
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.6'
@@ -201,7 +201,7 @@ jobs:
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.6'
@@ -231,7 +231,7 @@ jobs:
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.x'
@@ -259,7 +259,7 @@ jobs:
       RUN_TESTS: "true"
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.x'
@@ -290,7 +290,7 @@ jobs:
       TEST_TYPE: ${{ matrix.test-type }}
     if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
         with:
           python-version: '3.x'
@@ -308,7 +308,7 @@ jobs:
     needs:
       - cancel-previous-workflow-run
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: "Helm Tests"
         run: ./scripts/ci/kubernetes/ci_run_helm_testing.sh
       - name: "Cancel workflow on helm-tests failure"
@@ -325,7 +325,7 @@ jobs:
     env:
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: "Build PROD image ${{ matrix.python-version }}"
         run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
       - name: "Cancel workflow on build prod image failure"
@@ -351,7 +351,7 @@ jobs:
     env:
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: "Free space"
         run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build PROD images ${{ matrix.python-version }}"
@@ -378,7 +378,7 @@ jobs:
       PULL_PYTHON_BASE_IMAGES_FROM_CACHE: "false"
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - name: "Free space"
         run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image"
@@ -406,7 +406,7 @@ jobs:
       github.event_name == 'push' &&
       (github.ref == 'refs/heads/master' || github.ref == 'refs/heads/v1-10-test')
     steps:
-      - uses: actions/checkout@master
+      - uses: actions/checkout@v2
       - uses: actions/setup-python@v1
       - name: "Free space"
         run: ./scripts/ci/tools/ci_free_space_on_ci.sh


[airflow] 07/32: Remove package.json and yarn.lock from the prod image (#9814)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 1d4782e4e2061d8b8368afc698ef97ec67ca360c
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Tue Jul 14 16:34:21 2020 +0200

    Remove package.json and yarn.lock from the prod image (#9814)
    
    Closes #9810
    
    (cherry picked from commit 593a0ddaae2deaa283c260a32187cf3c27ec3e7d)
---
 Dockerfile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Dockerfile b/Dockerfile
index a882178..c06105d 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -225,6 +225,7 @@ RUN AIRFLOW_SITE_PACKAGE="/root/.local/lib/python${PYTHON_MAJOR_MINOR_VERSION}/s
         yarn --cwd "${WWW_DIR}" install --frozen-lockfile --no-cache; \
         yarn --cwd "${WWW_DIR}" run prod; \
         rm -rf "${WWW_DIR}/node_modules"; \
+        rm -vf "${WWW_DIR}"/{package.json,yarn.lock,.eslintignore,.eslintrc,.stylelintignore,.stylelintrc,compile_assets.sh,webpack.config.js} ;\
     fi
 
 # make sure that all directories and files in .local are also group accessible


[airflow] 09/32: Group CI scripts in subdirectories (#9653)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 7ec2b3ace1a1a4982b0c313b89eab6aee1eb9620
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Thu Jul 16 18:05:35 2020 +0200

    Group CI scripts in subdirectories (#9653)
    
    Reviewed the scripts and removed some of the old unused ones.
    
    (cherry picked from commit faec41ec9a05a037b88fd0213b1936cde2b5c454)
---
 .github/workflows/ci.yml                           |  62 +++++-----
 .pre-commit-config.yaml                            |  34 +++---
 .rat-excludes                                      |   1 +
 BREEZE.rst                                         |   2 +-
 STATIC_CODE_CHECKS.rst                             |  15 ++-
 TESTING.rst                                        |   8 +-
 breeze                                             |   4 +-
 docs/start_doc_server.sh                           |   4 +-
 hooks/build                                        |   4 +-
 hooks/push                                         |   5 -
 scripts/ci/ci_load_image_to_kind.sh                |  33 -----
 scripts/ci/ci_perform_kind_cluster_operation.sh    |  32 -----
 scripts/ci/{ => docs}/ci_docs.sh                   |   4 +-
 scripts/ci/{ => images}/ci_build_dockerhub.sh      |   4 +-
 .../ci/{ => images}/ci_prepare_ci_image_on_ci.sh   |   4 +-
 .../ci/{ => images}/ci_prepare_prod_image_on_ci.sh |   4 +-
 scripts/ci/{ => images}/ci_push_ci_image.sh        |   4 +-
 .../ci/{ => images}/ci_push_production_images.sh   |   4 +-
 .../ci/in_container/_in_container_script_init.sh   |   4 +-
 scripts/ci/in_container/_in_container_utils.sh     |   2 +-
 .../in_container/deploy_airflow_to_kubernetes.sh   |  23 ----
 scripts/ci/in_container/entrypoint_ci.sh           |  16 +--
 scripts/ci/{ => in_container}/run_cli_tool.sh      |   0
 scripts/ci/in_container/run_system_tests.sh        |   4 +-
 .../ci_deploy_app_to_kubernetes.sh                 |   4 +-
 .../ci/{ => kubernetes}/ci_run_kubernetes_tests.sh |   4 +-
 scripts/ci/{ => libraries}/_all_libs.sh            |  29 +++--
 scripts/ci/libraries/_build_images.sh              |   2 +-
 scripts/ci/libraries/_initialization.sh            |   6 +-
 scripts/ci/libraries/_kind.sh                      |  40 +++---
 scripts/ci/{ => libraries}/_script_init.sh         |  13 +-
 scripts/ci/minikdc.properties                      |  27 -----
 .../ci/{ => pre_commit}/pre_commit_bat_tests.sh    |   4 +-
 .../{ => pre_commit}/pre_commit_breeze_cmd_line.sh |  10 +-
 .../pre_commit_check_integrations.sh               |  10 +-
 .../{ => pre_commit}/pre_commit_check_license.sh   |   2 +-
 .../ci/pre_commit/pre_commit_check_order_setup.py  | 135 +++++++++++++++++++++
 scripts/ci/{ => pre_commit}/pre_commit_ci_build.sh |   4 +-
 scripts/ci/{ => pre_commit}/pre_commit_flake8.sh   |   4 +-
 .../pre_commit_generate_requirements.sh            |   4 +-
 .../ci/pre_commit/pre_commit_insert_extras.py      |   4 +-
 .../{ => pre_commit}/pre_commit_lint_dockerfile.sh |   4 +-
 .../pre_commit_local_yml_mounts.sh                 |   8 +-
 scripts/ci/{ => pre_commit}/pre_commit_mypy.sh     |   4 +-
 .../ci/{ => pre_commit}/pre_commit_yaml_to_cfg.py  |   7 +-
 scripts/ci/pre_commit_update_extras.sh             |  31 -----
 .../{ => requirements}/ci_generate_requirements.sh |   4 +-
 scripts/ci/{ => static_checks}/ci_bat_tests.sh     |   3 +
 scripts/ci/{ => static_checks}/ci_check_license.sh |   4 +-
 scripts/ci/{ => static_checks}/ci_flake8.sh        |   4 +-
 .../ci/{ => static_checks}/ci_lint_dockerfile.sh   |   4 +-
 scripts/ci/{ => static_checks}/ci_mypy.sh          |   4 +-
 .../ci/{ => static_checks}/ci_run_static_checks.sh |   4 +-
 scripts/ci/{ => testing}/ci_run_airflow_testing.sh |  33 ++---
 scripts/ci/{ => tools}/ci_count_changed_files.sh   |   4 +-
 scripts/ci/{ => tools}/ci_fix_ownership.sh         |  10 +-
 scripts/ci/{ => tools}/ci_free_space_on_ci.sh      |   4 +-
 tests/bats/bats_utils.bash                         |   4 +-
 tests/test_order_setup.py                          | 134 --------------------
 59 files changed, 355 insertions(+), 494 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 134bc1f..a849d58 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -60,15 +60,15 @@ jobs:
           path: ~/.cache/pre-commit
           key: ${{ env.cache-name }}-${{ github.job }}-${{ hashFiles('.pre-commit-config.yaml') }}
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Static checks"
         if: success()
         run: |
           python -m pip install pre-commit \
               --constraint requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt
-          ./scripts/ci/ci_run_static_checks.sh
+          ./scripts/ci/static_checks/ci_run_static_checks.sh
 
   docs:
     timeout-minutes: 60
@@ -82,9 +82,9 @@ jobs:
         with:
           python-version: '3.6'
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Build docs"
-        run: ./scripts/ci/ci_docs.sh
+        run: ./scripts/ci/docs/ci_docs.sh
 
   build-prod-image:
     timeout-minutes: 60
@@ -99,7 +99,7 @@ jobs:
     steps:
       - uses: actions/checkout@master
       - name: "Build PROD image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_prod_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
 
   trigger-tests:
     timeout-minutes: 10
@@ -112,7 +112,7 @@ jobs:
       - name: "Get count of changed python files"
         run: |
           set +e
-          ./scripts/ci/ci_count_changed_files.sh ${GITHUB_SHA} \
+          ./scripts/ci/tools/ci_count_changed_files.sh ${GITHUB_SHA} \
               '^airflow|.github/workflows/|^Dockerfile|^scripts|^chart|^setup.py|^requirements|^tests|^kubernetes_tests'
           echo "::set-output name=count::$?"
         id: trigger-tests
@@ -155,7 +155,7 @@ jobs:
         with:
           python-version: '3.6'
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - uses: engineerd/setup-kind@v0.4.0
         name: Setup Kind Cluster
         with:
@@ -163,7 +163,7 @@ jobs:
           name: airflow-python-${{matrix.python-version}}-${{matrix.kubernetes-version}}
           config: "scripts/ci/kubernetes/kind-cluster-conf.yaml"
       - name: "Deploy app to cluster"
-        run: ./scripts/ci/ci_deploy_app_to_kubernetes.sh
+        run: ./scripts/ci/kubernetes/ci_deploy_app_to_kubernetes.sh
       - name: Cache virtualenv for kubernetes testing
         uses: actions/cache@v2
         env:
@@ -173,7 +173,7 @@ jobs:
           key: "${{ env.cache-name }}-${{ github.job }}-\
 ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt') }}"
       - name: "Tests"
-        run: ./scripts/ci/ci_run_kubernetes_tests.sh
+        run: ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
       - uses: actions/upload-artifact@v2
         name: Upload KinD logs
         # Always run this, even if one of the previous steps failed.
@@ -208,11 +208,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         with:
           python-version: '3.6'
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Tests"
-        run: ./scripts/ci/ci_run_airflow_testing.sh
+        run: ./scripts/ci/testing/ci_run_airflow_testing.sh
 
   tests-mysql:
     timeout-minutes: 80
@@ -240,11 +240,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         with:
           python-version: '3.x'
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Tests"
-        run: ./scripts/ci/ci_run_airflow_testing.sh
+        run: ./scripts/ci/testing/ci_run_airflow_testing.sh
 
   tests-sqlite:
     timeout-minutes: 80
@@ -270,11 +270,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         with:
           python-version: '3.x'
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Tests"
-        run: ./scripts/ci/ci_run_airflow_testing.sh
+        run: ./scripts/ci/testing/ci_run_airflow_testing.sh
 
   tests-quarantined:
     timeout-minutes: 80
@@ -303,11 +303,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
         with:
           python-version: '3.x'
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Tests"
-        run: ./scripts/ci/ci_run_airflow_testing.sh
+        run: ./scripts/ci/testing/ci_run_airflow_testing.sh
 
   helm-tests:
     timeout-minutes: 5
@@ -337,11 +337,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Generate requirements"
-        run: ./scripts/ci/ci_generate_requirements.sh
+        run: ./scripts/ci/requirements/ci_generate_requirements.sh
 
   push-prod-images-to-github-cache:
     timeout-minutes: 80
@@ -364,11 +364,11 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     steps:
       - uses: actions/checkout@master
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build PROD images ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_prepare_prod_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
       - name: "Push PROD images ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_push_production_images.sh
+        run: ./scripts/ci/images/ci_push_production_images.sh
 
   push-ci-images-to-github-cache:
     timeout-minutes: 40
@@ -395,8 +395,8 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     steps:
       - uses: actions/checkout@master
       - name: "Free space"
-        run: ./scripts/ci/ci_free_space_on_ci.sh
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
       - name: "Build CI image"
-        run: ./scripts/ci/ci_prepare_ci_image_on_ci.sh
+        run: ./scripts/ci/images/ci_prepare_ci_image_on_ci.sh
       - name: "Push CI image ${{ matrix.python-version }}"
-        run: ./scripts/ci/ci_push_ci_image.sh
+        run: ./scripts/ci/images/ci_push_ci_image.sh
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 99dbbe9..eb91828 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -198,7 +198,7 @@ repos:
       - id: lint-dockerfile
         name: Lint dockerfile
         language: system
-        entry: "./scripts/ci/pre_commit_lint_dockerfile.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_lint_dockerfile.sh"
         files: ^Dockerfile.*$
         pass_filenames: true
       - id: setup-order
@@ -207,25 +207,25 @@ repos:
         files: ^setup.py$
         pass_filenames: false
         require_serial: true
-        entry: tests/test_order_setup.py
+        entry: ./scripts/ci/pre_commit/pre_commit_check_order_setup.py
       - id: update-breeze-file
         name: Update output of breeze command in BREEZE.rst
-        entry: "./scripts/ci/pre_commit_breeze_cmd_line.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_breeze_cmd_line.sh"
         language: system
         files: ^BREEZE.rst$|^breeze$|^breeze-complete$
         pass_filenames: false
         require_serial: true
       - id: update-local-yml-file
         name: Update mounts in the local yml file
-        entry: "./scripts/ci/pre_commit_local_yml_mounts.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_local_yml_mounts.sh"
         language: system
         files: ^scripts/ci/libraries/_local_mounts.sh$|^scripts/ci/docker-compose/local.yml$
         pass_filenames: false
         require_serial: true
       - id: update-extras
         name: Update extras in documentation
-        entry: "./scripts/ci/pre_commit_update_extras.sh"
-        language: system
+        entry: ./scripts/ci/pre_commit/pre_commit_insert_extras.py
+        language: python
         files: ^setup.py$|^INSTALL$|^CONTRIBUTING.rst$
         pass_filenames: false
         require_serial: true
@@ -265,14 +265,14 @@ repos:
                 ^\sdef\s*\S*\(.*\):\s*\-\>\s*\S*.*  # Matches -> return value syntax from Python3
             )$
         files: \.py$
-        exclude: ^dev/
+        exclude: ^dev|^scripts
         pass_filenames: true
       - id: python2-compile
         name: Compile code using python2
         language: system
         entry: python2.7 -m py_compile
         files: \.py$
-        exclude: ^dev/
+        exclude: ^dev|^scripts
         pass_filenames: true
         require_serial: true
       - id: pydevd
@@ -283,7 +283,7 @@ repos:
         pass_filenames: true
       - id: check-integrations
         name: Check if integration list is aligned
-        entry: ./scripts/ci/pre_commit_check_integrations.sh
+        entry: ./scripts/ci/pre_commit/pre_commit_check_integrations.sh
         language: system
         pass_filenames: false
         files: ^common/_common_values.sh$|^breeze-complete$
@@ -295,13 +295,13 @@ repos:
         pass_filenames: true
       - id: build
         name: Check if image build is needed
-        entry: ./scripts/ci/pre_commit_ci_build.sh 3.5 false
+        entry: ./scripts/ci/pre_commit/pre_commit_ci_build.sh 3.6 false
         language: system
         always_run: true
         pass_filenames: false
       - id: check-apache-license
         name: Check if licenses are OK for Apache
-        entry: "./scripts/ci/pre_commit_check_license.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_check_license.sh"
         language: system
         files: ^.*LICENSE.*$|^.*LICENCE.*$
         pass_filenames: false
@@ -309,28 +309,28 @@ repos:
       - id: airflow-config-yaml
         name: Checks for consistency between config.yml and default_config.cfg
         language: python
+        entry: ./scripts/ci/pre_commit/pre_commit_yaml_to_cfg.py
         files: "^airflow/config_templates/config.yml$|^airflow/config_templates/default_airflow.cfg$"
         pass_filenames: false
-        require_serial: false
-        entry: scripts/ci/pre_commit_yaml_to_cfg.py
+        require_serial: true
         additional_dependencies: ['pyyaml']
       - id: mypy
         name: Run mypy
         language: system
-        entry: "./scripts/ci/pre_commit_mypy.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_mypy.sh"
         files: \.py$
         exclude: ^dev
         require_serial: true
       - id: flake8
         name: Run flake8
         language: system
-        entry: "./scripts/ci/pre_commit_flake8.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_flake8.sh"
         files: \.py$
-        exclude: ^dev/
+        exclude: ^dev
         pass_filenames: true
       - id: bat-tests
         name: Run BATS bash tests for changed bash files
         language: system
-        entry: "./scripts/ci/pre_commit_bat_tests.sh"
+        entry: "./scripts/ci/pre_commit/pre_commit_bat_tests.sh"
         files: ^breeze$|^breeze-complete$|\.sh$|\.bash$
         pass_filenames: false
diff --git a/.rat-excludes b/.rat-excludes
index 6bef964..497d7ed 100644
--- a/.rat-excludes
+++ b/.rat-excludes
@@ -86,3 +86,4 @@ input_notebook.ipynb
 
 # .git might be a file in case of worktree
 .git
+tmp
diff --git a/BREEZE.rst b/BREEZE.rst
index 338777b..435b21e 100644
--- a/BREEZE.rst
+++ b/BREEZE.rst
@@ -920,7 +920,7 @@ by the root user, you can fix the ownership of those files by running this scrip
 
 .. code-block::
 
-  ./scripts/ci/ci_fix_ownership.sh
+  ./scripts/ci/tools/ci_fix_ownership.sh
 
 Mounting Local Sources to Breeze
 --------------------------------
diff --git a/STATIC_CODE_CHECKS.rst b/STATIC_CODE_CHECKS.rst
index 3cb2c5e..b3b5978 100644
--- a/STATIC_CODE_CHECKS.rst
+++ b/STATIC_CODE_CHECKS.rst
@@ -265,13 +265,12 @@ Running Static Code Checks via Scripts from the Host
 ....................................................
 
 You can trigger the static checks from the host environment, without entering the Docker container. To do
-this, run the following scripts (the same is done in the CI builds):
+this, run the following scripts:
 
-* `<scripts/ci/ci_check_license.sh>`_ - checks the licenses.
-* `<scripts/ci/ci_docs.sh>`_ - checks that documentation can be built without warnings.
-* `<scripts/ci/ci_flake8.sh>`_ - runs Flake8 source code style enforcement tool.
-* `<scripts/ci/ci_lint_dockerfile.sh>`_ - runs lint checker for the dockerfiles.
-* `<scripts/ci/ci_mypy.sh>`_ - runs a check for mypy type annotation consistency.
+* `<scripts/ci/static_checks/ci_check_license.sh>`_ - checks the licenses.
+* `<scripts/ci/static_checks/ci_flake8.sh>`_ - runs the Flake8 source code style enforcement tool.
+* `<scripts/ci/static_checks/ci_lint_dockerfile.sh>`_ - runs the lint checker for the dockerfiles.
+* `<scripts/ci/static_checks/ci_mypy.sh>`_ - runs a mypy check for type annotation consistency.
 
 The scripts may ask you to rebuild the images, if needed.
 
@@ -314,8 +313,8 @@ On the host:
 
 .. code-block::
 
-  ./scripts/ci/ci_mypy.sh ./airflow/example_dags/
+  ./scripts/ci/static_checks/ci_mypy.sh ./airflow/example_dags/
 
 .. code-block::
 
-  ./scripts/ci/ci_mypy.sh ./airflow/example_dags/test_utils.py
+  ./scripts/ci/static_checks/ci_mypy.sh ./airflow/example_dags/test_utils.py
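
For orientation, a minimal sketch of how the relocated static-check scripts can be invoked from the host after this change (the flake8 target paths below are illustrative examples only):

  ./scripts/ci/static_checks/ci_check_license.sh
  ./scripts/ci/static_checks/ci_lint_dockerfile.sh
  ./scripts/ci/static_checks/ci_flake8.sh ./airflow/example_dags/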
diff --git a/TESTING.rst b/TESTING.rst
index 7c761e8..02163fe 100644
--- a/TESTING.rst
+++ b/TESTING.rst
@@ -378,10 +378,10 @@ to run the tests manually one by one.
 
     Running kubernetes tests
 
-      ./scripts/ci/ci_run_kubernetes_tests.sh                      - runs all kubernetes tests
-      ./scripts/ci/ci_run_kubernetes_tests.sh TEST [TEST ...]      - runs selected kubernetes tests (from kubernetes_tests folder)
-      ./scripts/ci/ci_run_kubernetes_tests.sh [-i|--interactive]   - Activates virtual environment ready to run tests and drops you in
-      ./scripts/ci/ci_run_kubernetes_tests.sh [--help]             - Prints this help message
+      ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh                      - runs all kubernetes tests
+      ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh TEST [TEST ...]      - runs selected kubernetes tests (from kubernetes_tests folder)
+      ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh [-i|--interactive]   - Activates virtual environment ready to run tests and drops you in
+      ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh [--help]             - Prints this help message
 
 
 You can also run the same tests command with Breeze, using ``kind-cluster test`` command (to run all
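
A brief usage sketch for the relocated kubernetes test runner (the test module name is an assumed example from the kubernetes_tests folder):

  ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
  ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh kubernetes_tests/test_kubernetes_pod_operator.py
  ./scripts/ci/kubernetes/ci_run_kubernetes_tests.sh --interactive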
diff --git a/breeze b/breeze
index a10ba9c..abb95b7 100755
--- a/breeze
+++ b/breeze
@@ -62,8 +62,8 @@ function setup_default_breeze_variables() {
 
     # load all the common functions here - those are the functions that are shared between Breeze
     # and CI scripts (CI scripts do not use Breeze as execution environment)
-    # shellcheck source=scripts/ci/_all_libs.sh
-    . "${SCRIPTS_CI_DIR}/_all_libs.sh"
+    # shellcheck source=scripts/ci/libraries/_all_libs.sh
+    . "${SCRIPTS_CI_DIR}/libraries/_all_libs.sh"
 
     # We have different versions of images depending on the python version used. We keep up with the
     # Latest patch-level changes in Python (this is done automatically during CI builds) so we have
diff --git a/docs/start_doc_server.sh b/docs/start_doc_server.sh
index 26248ec..51cc7a77 100755
--- a/docs/start_doc_server.sh
+++ b/docs/start_doc_server.sh
@@ -16,8 +16,8 @@
 # specific language governing permissions and limitations
 # under the License.
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-(cd "${MY_DIR}"/_build/html || exit;
+DOCS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+(cd "${DOCS_DIR}"/_build/html || exit;
 # The below command works on both Python 2 and Python 3
 python -m http.server 8000 || python -m SimpleHTTPServer 8000
 )
diff --git a/hooks/build b/hooks/build
index a98a923..d3d3efb 100755
--- a/hooks/build
+++ b/hooks/build
@@ -20,9 +20,9 @@
 # on Travis CI to potentially rebuild (and refresh layers that
 # are not cached) Docker images that are used to run CI jobs
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+_HOOK_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 
 # Dockerhub builds are run inside Docker container
 export SKIP_IN_CONTAINER_CHECK="true"
 
-exec "${MY_DIR}/../scripts/ci/ci_build_dockerhub.sh"
+exec "${_HOOK_DIR}/../scripts/ci/images/ci_build_dockerhub.sh"
diff --git a/hooks/push b/hooks/push
index 91cf096..84805b0 100755
--- a/hooks/push
+++ b/hooks/push
@@ -20,11 +20,6 @@
 # and it is difficult to pass list of the built images from the build to push phase
 set -euo pipefail
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-echo "My dir: ${MY_DIR}"
-
-
 echo
 echo "Skip pushing the image. All images were built and pushed in the build hook already!"
 echo
diff --git a/scripts/ci/ci_load_image_to_kind.sh b/scripts/ci/ci_load_image_to_kind.sh
deleted file mode 100755
index dda1e38..0000000
--- a/scripts/ci/ci_load_image_to_kind.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
-
-cd "${AIRFLOW_SOURCES}" || exit 1
-
-export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:="3.6"}
-export KIND_CLUSTER_NAME=${KIND_CLUSTER_NAME:="airflow-python-${PYTHON_MAJOR_MINOR_VERSION}-${KUBERNETES_VERSION}"}
-
-prepare_prod_build
-echo
-echo "Loading the ${AIRFLOW_PROD_IMAGE} to cluster ${KIND_CLUSTER_NAME} from docker"
-echo
-"${AIRFLOW_SOURCES}/.build/bin/kind" load docker-image --name "${KIND_CLUSTER_NAME}" "${AIRFLOW_PROD_IMAGE}"
-echo
-echo "Loaded the ${AIRFLOW_PROD_IMAGE} to cluster ${KIND_CLUSTER_NAME}"
-echo
diff --git a/scripts/ci/ci_perform_kind_cluster_operation.sh b/scripts/ci/ci_perform_kind_cluster_operation.sh
deleted file mode 100755
index 4d3ddd2..0000000
--- a/scripts/ci/ci_perform_kind_cluster_operation.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
-
-# adding trap to exiting trap
-HANDLERS="$( trap -p EXIT | cut -f2 -d \' )"
-# shellcheck disable=SC2064
-trap "${HANDLERS}${HANDLERS:+;}dump_kind_logs" EXIT
-
-get_environment_for_builds_on_ci
-make_sure_kubernetes_tools_are_installed
-initialize_kind_variables
-perform_kind_cluster_operation "${@}"
-
-check_cluster_ready_for_airflow
diff --git a/scripts/ci/ci_docs.sh b/scripts/ci/docs/ci_docs.sh
similarity index 92%
rename from scripts/ci/ci_docs.sh
rename to scripts/ci/docs/ci_docs.sh
index 761e9a0..847cb49 100755
--- a/scripts/ci/ci_docs.sh
+++ b/scripts/ci/docs/ci_docs.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 function run_docs() {
     docker run "${EXTRA_DOCKER_FLAGS[@]}" -t \
diff --git a/scripts/ci/ci_build_dockerhub.sh b/scripts/ci/images/ci_build_dockerhub.sh
similarity index 95%
rename from scripts/ci/ci_build_dockerhub.sh
rename to scripts/ci/images/ci_build_dockerhub.sh
index 97f9b8c..99612a2 100755
--- a/scripts/ci/ci_build_dockerhub.sh
+++ b/scripts/ci/images/ci_build_dockerhub.sh
@@ -47,8 +47,8 @@ echo "DOCKER_TAG=${DOCKER_TAG}"
 echo "Detected PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION}"
 echo
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 if [[ ${DOCKER_TAG} == *python*-ci ]]; then
     echo
diff --git a/scripts/ci/ci_prepare_ci_image_on_ci.sh b/scripts/ci/images/ci_prepare_ci_image_on_ci.sh
similarity index 87%
rename from scripts/ci/ci_prepare_ci_image_on_ci.sh
rename to scripts/ci/images/ci_prepare_ci_image_on_ci.sh
index 8ced220..5fd7913 100755
--- a/scripts/ci/ci_prepare_ci_image_on_ci.sh
+++ b/scripts/ci/images/ci_prepare_ci_image_on_ci.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 build_ci_image_on_ci
diff --git a/scripts/ci/ci_prepare_prod_image_on_ci.sh b/scripts/ci/images/ci_prepare_prod_image_on_ci.sh
similarity index 87%
rename from scripts/ci/ci_prepare_prod_image_on_ci.sh
rename to scripts/ci/images/ci_prepare_prod_image_on_ci.sh
index 066a43b..ab4e7c0 100755
--- a/scripts/ci/ci_prepare_prod_image_on_ci.sh
+++ b/scripts/ci/images/ci_prepare_prod_image_on_ci.sh
@@ -15,7 +15,7 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 build_prod_image_on_ci
diff --git a/scripts/ci/ci_push_ci_image.sh b/scripts/ci/images/ci_push_ci_image.sh
similarity index 88%
rename from scripts/ci/ci_push_ci_image.sh
rename to scripts/ci/images/ci_push_ci_image.sh
index 09e2a7a..664d81a 100755
--- a/scripts/ci/ci_push_ci_image.sh
+++ b/scripts/ci/images/ci_push_ci_image.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 prepare_ci_build
 
diff --git a/scripts/ci/ci_push_production_images.sh b/scripts/ci/images/ci_push_production_images.sh
similarity index 88%
rename from scripts/ci/ci_push_production_images.sh
rename to scripts/ci/images/ci_push_production_images.sh
index d6c6e7d..db439d8 100755
--- a/scripts/ci/ci_push_production_images.sh
+++ b/scripts/ci/images/ci_push_production_images.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 prepare_prod_build
 
diff --git a/scripts/ci/in_container/_in_container_script_init.sh b/scripts/ci/in_container/_in_container_script_init.sh
index 3540016..50a558e 100755
--- a/scripts/ci/in_container/_in_container_script_init.sh
+++ b/scripts/ci/in_container/_in_container_script_init.sh
@@ -19,10 +19,10 @@
 set -euo pipefail
 
 # This should only be sourced from in_container directory!
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+IN_CONTAINER_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 
 # shellcheck source=scripts/ci/in_container/_in_container_utils.sh
-. "${MY_DIR}/_in_container_utils.sh"
+. "${IN_CONTAINER_DIR}/_in_container_utils.sh"
 
 in_container_basic_sanity_check
 
diff --git a/scripts/ci/in_container/_in_container_utils.sh b/scripts/ci/in_container/_in_container_utils.sh
index 0eb3a8a..f2e94d4 100644
--- a/scripts/ci/in_container/_in_container_utils.sh
+++ b/scripts/ci/in_container/_in_container_utils.sh
@@ -155,7 +155,7 @@ function setup_kerberos() {
     PASS="airflow"
     KRB5_KTNAME=/etc/airflow.keytab
 
-    sudo cp "${MY_DIR}/krb5/krb5.conf" /etc/krb5.conf
+    sudo cp "${AIRFLOW_SOURCES}/scripts/ci/in_container/krb5/krb5.conf" /etc/krb5.conf
 
     echo -e "${PASS}\n${PASS}" | \
         sudo kadmin -p "${ADMIN}/admin" -w "${PASS}" -q "addprinc -randkey airflow/${FQDN}" 2>&1 \
diff --git a/scripts/ci/in_container/deploy_airflow_to_kubernetes.sh b/scripts/ci/in_container/deploy_airflow_to_kubernetes.sh
deleted file mode 100755
index a124fba..0000000
--- a/scripts/ci/in_container/deploy_airflow_to_kubernetes.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# Script to run flake8 on all code. Can be started from any working directory
-# shellcheck source=scripts/ci/in_container/_in_container_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_in_container_script_init.sh"
-
-"${MY_DIR}/kubernetes/docker/rebuild_airflow_image.sh"
-"${MY_DIR}/kubernetes/app/deploy_app.sh"
diff --git a/scripts/ci/in_container/entrypoint_ci.sh b/scripts/ci/in_container/entrypoint_ci.sh
index 4d1bf0c..eb1ff51 100755
--- a/scripts/ci/in_container/entrypoint_ci.sh
+++ b/scripts/ci/in_container/entrypoint_ci.sh
@@ -22,7 +22,7 @@ fi
 # shellcheck source=scripts/ci/in_container/_in_container_script_init.sh
 . /opt/airflow/scripts/ci/in_container/_in_container_script_init.sh
 
-AIRFLOW_SOURCES=$(cd "${MY_DIR}/../../.." || exit 1; pwd)
+AIRFLOW_SOURCES=$(cd "${IN_CONTAINER_DIR}/../../.." || exit 1; pwd)
 
 PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:=3.6}
 BACKEND=${BACKEND:=sqlite}
@@ -47,8 +47,8 @@ INSTALL_AIRFLOW_VERSION="${INSTALL_AIRFLOW_VERSION:=""}"
 
 if [[ ${CI} == "false" ]]; then
     # Create links for useful CLI tools
-    # shellcheck source=scripts/ci/run_cli_tool.sh
-    source <(bash scripts/ci/run_cli_tool.sh)
+    # shellcheck source=scripts/ci/in_container/run_cli_tool.sh
+    source <(bash scripts/ci/in_container/run_cli_tool.sh)
 fi
 
 if [[ ${AIRFLOW_VERSION} == *1.10* || ${INSTALL_AIRFLOW_VERSION} == *1.10* ]]; then
@@ -98,10 +98,10 @@ export PATH=${PATH}:${AIRFLOW_SOURCES}
 unset AIRFLOW__CORE__UNIT_TEST_MODE
 
 mkdir -pv "${AIRFLOW_HOME}/logs/"
-cp -f "${MY_DIR}/airflow_ci.cfg" "${AIRFLOW_HOME}/unittests.cfg"
+cp -f "${IN_CONTAINER_DIR}/airflow_ci.cfg" "${AIRFLOW_HOME}/unittests.cfg"
 
 set +e
-"${MY_DIR}/check_environment.sh"
+"${IN_CONTAINER_DIR}/check_environment.sh"
 ENVIRONMENT_EXIT_CODE=$?
 set -e
 if [[ ${ENVIRONMENT_EXIT_CODE} != 0 ]]; then
@@ -150,7 +150,7 @@ done
 ssh-keyscan -H localhost >> ~/.ssh/known_hosts 2>/dev/null
 
 # shellcheck source=scripts/ci/in_container/configure_environment.sh
-. "${MY_DIR}/configure_environment.sh"
+. "${IN_CONTAINER_DIR}/configure_environment.sh"
 
 cd "${AIRFLOW_SOURCES}"
 
@@ -211,7 +211,7 @@ fi
 ARGS=("${EXTRA_PYTEST_ARGS[@]}" "${TESTS_TO_RUN[@]}")
 
 if [[ ${RUN_SYSTEM_TESTS:="false"} == "true" ]]; then
-    "${MY_DIR}/run_system_tests.sh" "${ARGS[@]}"
+    "${IN_CONTAINER_DIR}/run_system_tests.sh" "${ARGS[@]}"
 else
-    "${MY_DIR}/run_ci_tests.sh" "${ARGS[@]}"
+    "${IN_CONTAINER_DIR}/run_ci_tests.sh" "${ARGS[@]}"
 fi
diff --git a/scripts/ci/run_cli_tool.sh b/scripts/ci/in_container/run_cli_tool.sh
similarity index 100%
rename from scripts/ci/run_cli_tool.sh
rename to scripts/ci/in_container/run_cli_tool.sh
diff --git a/scripts/ci/in_container/run_system_tests.sh b/scripts/ci/in_container/run_system_tests.sh
index 11dcf06..8cb3c3e 100755
--- a/scripts/ci/in_container/run_system_tests.sh
+++ b/scripts/ci/in_container/run_system_tests.sh
@@ -20,10 +20,10 @@
 # Bash sanity settings (error on exit, complain for undefined vars, error when pipe fails)
 set -euo pipefail
 
-MY_DIR=$(cd "$(dirname "$0")" || exit 1; pwd)
+IN_CONTAINER_DIR=$(cd "$(dirname "$0")" || exit 1; pwd)
 
 # shellcheck source=scripts/ci/in_container/_in_container_utils.sh
-. "${MY_DIR}/_in_container_utils.sh"
+. "${IN_CONTAINER_DIR}/_in_container_utils.sh"
 
 in_container_basic_sanity_check
 
diff --git a/scripts/ci/ci_deploy_app_to_kubernetes.sh b/scripts/ci/kubernetes/ci_deploy_app_to_kubernetes.sh
similarity index 92%
rename from scripts/ci/ci_deploy_app_to_kubernetes.sh
rename to scripts/ci/kubernetes/ci_deploy_app_to_kubernetes.sh
index 307bf00..d4bb4b1 100755
--- a/scripts/ci/ci_deploy_app_to_kubernetes.sh
+++ b/scripts/ci/kubernetes/ci_deploy_app_to_kubernetes.sh
@@ -15,8 +15,8 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 set -euo pipefail
 
diff --git a/scripts/ci/ci_run_kubernetes_tests.sh b/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
similarity index 96%
rename from scripts/ci/ci_run_kubernetes_tests.sh
rename to scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
index 4d49e9e..3d8194a 100755
--- a/scripts/ci/ci_run_kubernetes_tests.sh
+++ b/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh
@@ -15,8 +15,8 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 # adding trap to exiting trap
 HANDLERS="$( trap -p EXIT | cut -f2 -d \' )"
diff --git a/scripts/ci/_all_libs.sh b/scripts/ci/libraries/_all_libs.sh
similarity index 68%
rename from scripts/ci/_all_libs.sh
rename to scripts/ci/libraries/_all_libs.sh
index 9869598..f70bbde 100755
--- a/scripts/ci/_all_libs.sh
+++ b/scripts/ci/libraries/_all_libs.sh
@@ -16,34 +16,33 @@
 # specific language governing permissions and limitations
 # under the License.
 
-SCRIPTS_CI_DIR=$(dirname "${BASH_SOURCE[0]}")
+LIBRARIES_DIR=$(dirname "${BASH_SOURCE[0]}")
 
-# must be first to initialize arrays TODO: For sure?
 # shellcheck source=scripts/ci/libraries/_initialization.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_initialization.sh
+. "${LIBRARIES_DIR}"/_initialization.sh
 
 
 # shellcheck source=scripts/ci/libraries/_sanity_checks.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_sanity_checks.sh
+. "${LIBRARIES_DIR}"/_sanity_checks.sh
 # shellcheck source=scripts/ci/libraries/_build_images.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_build_images.sh
+. "${LIBRARIES_DIR}"/_build_images.sh
 # shellcheck source=scripts/ci/libraries/_kind.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_kind.sh
+. "${LIBRARIES_DIR}"/_kind.sh
 # shellcheck source=scripts/ci/libraries/_local_mounts.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_local_mounts.sh
+. "${LIBRARIES_DIR}"/_local_mounts.sh
 # shellcheck source=scripts/ci/libraries/_md5sum.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_md5sum.sh
+. "${LIBRARIES_DIR}"/_md5sum.sh
 # shellcheck source=scripts/ci/libraries/_parameters.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_parameters.sh
+. "${LIBRARIES_DIR}"/_parameters.sh
 # shellcheck source=scripts/ci/libraries/_permissions.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_permissions.sh
+. "${LIBRARIES_DIR}"/_permissions.sh
 # shellcheck source=scripts/ci/libraries/_push_pull_remove_images.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_push_pull_remove_images.sh
+. "${LIBRARIES_DIR}"/_push_pull_remove_images.sh
 # shellcheck source=scripts/ci/libraries/_runs.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_runs.sh
+. "${LIBRARIES_DIR}"/_runs.sh
 # shellcheck source=scripts/ci/libraries/_spinner.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_spinner.sh
+. "${LIBRARIES_DIR}"/_spinner.sh
 # shellcheck source=scripts/ci/libraries/_start_end.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_start_end.sh
+. "${LIBRARIES_DIR}"/_start_end.sh
 # shellcheck source=scripts/ci/libraries/_verbosity.sh
-. "${SCRIPTS_CI_DIR}"/libraries/_verbosity.sh
+. "${LIBRARIES_DIR}"/_verbosity.sh
diff --git a/scripts/ci/libraries/_build_images.sh b/scripts/ci/libraries/_build_images.sh
index 352975b..01eac17 100644
--- a/scripts/ci/libraries/_build_images.sh
+++ b/scripts/ci/libraries/_build_images.sh
@@ -421,7 +421,7 @@ function rebuild_ci_image_if_needed() {
             if [[ ${SYSTEM} != "Darwin" ]]; then
                 ROOT_FILES_COUNT=$(find "airflow" "tests" -user root | wc -l | xargs)
                 if [[ ${ROOT_FILES_COUNT} != "0" ]]; then
-                    ./scripts/ci/ci_fix_ownership.sh
+                    ./scripts/ci/tools/ci_fix_ownership.sh
                 fi
             fi
             print_info
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index 5f2a742..d0b14a7 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -33,10 +33,6 @@ function initialize_common_environment {
     # shellcheck disable=SC2034
     FILES_TO_CLEANUP_ON_EXIT=()
 
-    # Sets to where airflow sources are located
-    AIRFLOW_SOURCES=${AIRFLOW_SOURCES:=$(cd "${MY_DIR}/../../" && pwd)}
-    export AIRFLOW_SOURCES
-
     # Sets to the build cache directory - status of build and convenience scripts are stored there
     BUILD_CACHE_DIR="${AIRFLOW_SOURCES}/.build"
     export BUILD_CACHE_DIR
@@ -172,7 +168,7 @@ function initialize_common_environment {
     fi
 
     # Read airflow version from the version.py
-    AIRFLOW_VERSION=$(grep version "airflow/version.py" | awk '{print $3}' | sed "s/['+]//g")
+    AIRFLOW_VERSION=$(grep version "${AIRFLOW_SOURCES}/airflow/version.py" | awk '{print $3}' | sed "s/['+]//g")
     export AIRFLOW_VERSION
 
     # default version of python used to tag the "master" and "latest" images in DockerHub
diff --git a/scripts/ci/libraries/_kind.sh b/scripts/ci/libraries/_kind.sh
index f24b3aa..2cf43c3 100644
--- a/scripts/ci/libraries/_kind.sh
+++ b/scripts/ci/libraries/_kind.sh
@@ -125,6 +125,16 @@ function delete_cluster() {
 }
 
 function perform_kind_cluster_operation() {
+    ALLOWED_KIND_OPERATIONS="[ start restart stop deploy test shell ]"
+
+    set +u
+    if [[ -z "${1}" ]]; then
+        echo >&2
+        echo >&2 "Operation must be provided as first parameter. One of: ${ALLOWED_KIND_OPERATIONS}"
+        echo >&2
+        exit 1
+    fi
+    set -u
     OPERATION="${1}"
     ALL_CLUSTERS=$(kind get clusters || true)
 
@@ -181,17 +191,16 @@ function perform_kind_cluster_operation() {
             echo
             echo "Testing with KinD"
             echo
-            "${AIRFLOW_SOURCES}/scripts/ci/ci_run_kubernetes_tests.sh"
+            "${AIRFLOW_SOURCES}/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh"
         elif [[ ${OPERATION} == "shell" ]]; then
             echo
             echo "Entering an interactive shell for kubernetes testing"
             echo
-            "${AIRFLOW_SOURCES}/scripts/ci/ci_run_kubernetes_tests.sh" "-i"
+            "${AIRFLOW_SOURCES}/scripts/ci/kubernetes/ci_run_kubernetes_tests.sh" "-i"
         else
-            echo
-            echo "Wrong cluster operation: ${OPERATION}. Should be one of:"
-            echo "${FORMATTED_KIND_OPERATIONS}"
-            echo
+            echo >&2
+            echo >&2 "Wrong cluster operation: ${OPERATION}. Should be one of: ${ALLOWED_KIND_OPERATIONS}"
+            echo >&2
             exit 1
         fi
     else
@@ -208,15 +217,14 @@ function perform_kind_cluster_operation() {
             create_cluster
         elif [[ ${OPERATION} == "stop" || ${OEPRATON} == "deploy" || \
                 ${OPERATION} == "test" || ${OPERATION} == "shell" ]]; then
-            echo
-            echo "Cluster ${KIND_CLUSTER_NAME} does not exist. It should exist for ${OPERATION} operation"
-            echo
+            echo >&2
+            echo >&2 "Cluster ${KIND_CLUSTER_NAME} does not exist. It should exist for ${OPERATION} operation"
+            echo >&2
             exit 1
         else
-            echo
-            echo "Wrong cluster operation: ${OPERATION}. Should be one of:"
-            echo "${FORMATTED_KIND_OPERATIONS}"
-            echo
+            echo >&2
+            echo >&2 "Wrong cluster operation: ${OPERATION}. Should be one of ${ALLOWED_KIND_OPERATIONS}"
+            echo >&2
             exit 1
         fi
     fi
@@ -262,9 +270,9 @@ function forward_port_to_kind_webserver() {
     set +e
     while ! curl http://localhost:8080/health -s | grep -q healthy; do
         if [[ ${num_tries} == 6 ]]; then
-            echo
-            echo "ERROR! Could not setup a forward port to Airflow's webserver after ${num_tries}! Exiting."
-            echo
+            echo >&2
+            echo >&2 "ERROR! Could not setup a forward port to Airflow's webserver after ${num_tries}! Exiting."
+            echo >&2
             exit 1
         fi
         echo
diff --git a/scripts/ci/_script_init.sh b/scripts/ci/libraries/_script_init.sh
similarity index 73%
rename from scripts/ci/_script_init.sh
rename to scripts/ci/libraries/_script_init.sh
index 2acbf10..804037e 100755
--- a/scripts/ci/_script_init.sh
+++ b/scripts/ci/libraries/_script_init.sh
@@ -18,17 +18,18 @@
 
 set -euo pipefail
 
-# This should only be sourced from CI directory!
+_CURRENT_DIR=$(dirname "${BASH_SOURCE[0]}")
 
-SCRIPTS_CI_DIR="$( dirname "${BASH_SOURCE[0]}" )"
+SCRIPTS_CI_DIR="$(cd "${_CURRENT_DIR}"/.. && pwd)"
 export SCRIPTS_CI_DIR
 
-# shellcheck source=scripts/ci/_all_libs.sh
-. "${SCRIPTS_CI_DIR}"/_all_libs.sh
+# Sets to where airflow sources are located
+AIRFLOW_SOURCES=${AIRFLOW_SOURCES:=$(cd "${SCRIPTS_CI_DIR}/../../" && pwd)}
+export AIRFLOW_SOURCES
 
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-export MY_DIR
+# shellcheck source=scripts/ci/libraries/_all_libs.sh
+. "${SCRIPTS_CI_DIR}"/libraries/_all_libs.sh
 
 initialize_common_environment
 
diff --git a/scripts/ci/minikdc.properties b/scripts/ci/minikdc.properties
deleted file mode 100644
index c70ff84..0000000
--- a/scripts/ci/minikdc.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-org.name=TEST
-org.domain=LOCAL
-kdc.bind.address=localhost
-kdc.port=8888
-instance=DefaultKrbServer
-max.ticket.lifetime=86400000
-max.renewable.lifetime=604800000
-transport=TCP
-debug=true
diff --git a/scripts/ci/pre_commit_bat_tests.sh b/scripts/ci/pre_commit/pre_commit_bat_tests.sh
similarity index 87%
rename from scripts/ci/pre_commit_bat_tests.sh
rename to scripts/ci/pre_commit/pre_commit_bat_tests.sh
index 4ae72c3..b820c7e 100755
--- a/scripts/ci/pre_commit_bat_tests.sh
+++ b/scripts/ci/pre_commit/pre_commit_bat_tests.sh
@@ -24,5 +24,5 @@ else
     PARAMS=("${@}")
 fi
 
-# shellcheck source=scripts/ci/ci_bat_tests.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/ci_bat_tests.sh" "${PARAMS[@]}"
+# shellcheck source=scripts/ci/static_checks/ci_bat_tests.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../static_checks/ci_bat_tests.sh" "${PARAMS[@]}"
diff --git a/scripts/ci/pre_commit_breeze_cmd_line.sh b/scripts/ci/pre_commit/pre_commit_breeze_cmd_line.sh
similarity index 89%
rename from scripts/ci/pre_commit_breeze_cmd_line.sh
rename to scripts/ci/pre_commit/pre_commit_breeze_cmd_line.sh
index eeac2e6..4d62cc1 100755
--- a/scripts/ci/pre_commit_breeze_cmd_line.sh
+++ b/scripts/ci/pre_commit/pre_commit_breeze_cmd_line.sh
@@ -18,12 +18,14 @@
 
 set -euo pipefail
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+PRE_COMMIT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+AIRFLOW_SOURCES=$(cd "${PRE_COMMIT_DIR}/../../../" && pwd);
+cd "${AIRFLOW_SOURCES}" || exit 1
+
+
 TMP_FILE=$(mktemp)
 TMP_OUTPUT=$(mktemp)
 
-cd "${MY_DIR}/../../" || exit;
-
 echo "
 .. code-block:: text
 " >"${TMP_FILE}"
@@ -46,7 +48,7 @@ if (( MAX_LEN > MAX_SCREEN_WIDTH + 2 )); then
     exit 1
 fi
 
-BREEZE_RST_FILE="${MY_DIR}/../../BREEZE.rst"
+BREEZE_RST_FILE="${AIRFLOW_SOURCES}/BREEZE.rst"
 
 LEAD='^ \.\. START BREEZE HELP MARKER$'
 TAIL='^ \.\. END BREEZE HELP MARKER$'
diff --git a/scripts/ci/pre_commit_check_integrations.sh b/scripts/ci/pre_commit/pre_commit_check_integrations.sh
similarity index 81%
rename from scripts/ci/pre_commit_check_integrations.sh
rename to scripts/ci/pre_commit/pre_commit_check_integrations.sh
index 69ace38..6871941 100755
--- a/scripts/ci/pre_commit_check_integrations.sh
+++ b/scripts/ci/pre_commit/pre_commit_check_integrations.sh
@@ -17,12 +17,12 @@
 # under the License.
 set -euo pipefail
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+PRE_COMMIT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+AIRFLOW_SOURCES=$(cd "${PRE_COMMIT_DIR}/../../../" && pwd);
+cd "${AIRFLOW_SOURCES}" || exit 1
 
-cd "${MY_DIR}/../../" || exit;
-
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 . breeze-complete
 
diff --git a/scripts/ci/pre_commit_check_license.sh b/scripts/ci/pre_commit/pre_commit_check_license.sh
similarity index 92%
rename from scripts/ci/pre_commit_check_license.sh
rename to scripts/ci/pre_commit/pre_commit_check_license.sh
index 4ab6964..109df31 100755
--- a/scripts/ci/pre_commit_check_license.sh
+++ b/scripts/ci/pre_commit/pre_commit_check_license.sh
@@ -21,5 +21,5 @@ export FORCE_ANSWER_TO_QUESTIONS=${FORCE_ANSWER_TO_QUESTIONS:="quit"}
 export REMEMBER_LAST_ANSWER="true"
 
 # Hide lines between ****/**** (detailed list of files)
-"$( dirname "${BASH_SOURCE[0]}" )/ci_check_license.sh" 2>&1 | \
+"$( dirname "${BASH_SOURCE[0]}" )/../static_checks/ci_check_license.sh" 2>&1 | \
     (sed "/Files with Apache License headers will be marked AL.*$/,/^\**$/d" || true)
diff --git a/scripts/ci/pre_commit/pre_commit_check_order_setup.py b/scripts/ci/pre_commit/pre_commit_check_order_setup.py
new file mode 100755
index 0000000..c833e22
--- /dev/null
+++ b/scripts/ci/pre_commit/pre_commit_check_order_setup.py
@@ -0,0 +1,135 @@
+#!/usr/bin/env python3
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Test for an order of dependencies in setup.py
+"""
+
+import os
+import re
+import sys
+from os.path import abspath, dirname
+from typing import List
+
+errors = []
+
+
+def _check_list_sorted(the_list: List[str], message: str) -> None:
+    sorted_list = sorted(the_list)
+    if the_list == sorted_list:
+        print(f"{message} is ok")
+        return
+    i = 0
+    while sorted_list[i] == the_list[i]:
+        i += 1
+    print(f"{message} NOK")
+    errors.append(f"ERROR in {message}. First wrongly sorted element"
+                  f" {the_list[i]}. Should be {sorted_list[i]}")
+
+
+def setup() -> str:
+    setup_py_file_path = abspath(os.path.join(dirname(__file__), os.pardir, os.pardir, os.pardir, 'setup.py'))
+    with open(setup_py_file_path) as setup_file:
+        setup_context = setup_file.read()
+    return setup_context
+
+
+def check_main_dependent_group(setup_context: str) -> None:
+    """
+    Test for an order of dependencies groups between mark
+    '# Start dependencies group' and '# End dependencies group' in setup.py
+    """
+    pattern_main_dependent_group = re.compile(
+        '# Start dependencies group\n(.*)# End dependencies group', re.DOTALL)
+    main_dependent_group = pattern_main_dependent_group.findall(setup_context)[0]
+
+    pattern_sub_dependent = re.compile(' = \\[.*?\\]\n', re.DOTALL)
+    main_dependent = pattern_sub_dependent.sub(',', main_dependent_group)
+
+    src = main_dependent.strip(',').split(',')
+    _check_list_sorted(src, "Order of dependencies")
+
+
+def check_sub_dependent_group(setup_context: str) -> None:
+    r"""
+    Test for an order of each dependencies groups declare like
+    `^dependent_group_name = [.*?]\n` in setup.py
+    """
+    pattern_dependent_group_name = re.compile('^(\\w+) = \\[', re.MULTILINE)
+    dependent_group_names = pattern_dependent_group_name.findall(setup_context)
+
+    pattern_dependent_version = re.compile('[~|><=;].*')
+
+    for group_name in dependent_group_names:
+        pattern_sub_dependent = re.compile(
+            '{group_name} = \\[(.*?)\\]'.format(group_name=group_name), re.DOTALL)
+        sub_dependent = pattern_sub_dependent.findall(setup_context)[0]
+        pattern_dependent = re.compile('\'(.*?)\'')
+        dependent = pattern_dependent.findall(sub_dependent)
+
+        src = [pattern_dependent_version.sub('', p) for p in dependent]
+        _check_list_sorted(src, f"Order of sub-dependencies group: {group_name}")
+
+
+def check_alias_dependent_group(setup_context: str) -> None:
+    """
+    Test for an order of each dependencies groups declare like
+    `alias_dependent_group = dependent_group_1 + ... + dependent_group_n` in setup.py
+    """
+    pattern = re.compile('^\\w+ = (\\w+ \\+.*)', re.MULTILINE)
+    dependents = pattern.findall(setup_context)
+
+    for dependent in dependents:
+        src = dependent.split(' + ')
+        _check_list_sorted(src, f"Order of alias dependencies group: {dependent}")
+
+
+def check_install_and_setup_requires(setup_context: str) -> None:
+    """
+    Test for an order of dependencies in function do_setup section
+    install_requires and setup_requires in setup.py
+    """
+    pattern_install_and_setup_requires = re.compile(
+        '(setup_requires) ?= ?\\[(.*?)\\]', re.DOTALL)
+    install_and_setup_requires = pattern_install_and_setup_requires.findall(setup_context)
+
+    for dependent_requires in install_and_setup_requires:
+        pattern_dependent = re.compile('\'(.*?)\'')
+        dependent = pattern_dependent.findall(dependent_requires[1])
+        pattern_dependent_version = re.compile('[~|><=;].*')
+
+        src = [pattern_dependent_version.sub('', p) for p in dependent]
+        _check_list_sorted(src, f"Order of dependencies in do_setup section: {dependent_requires[0]}")
+
+
+if __name__ == '__main__':
+    setup_context_main = setup()
+    check_main_dependent_group(setup_context_main)
+    check_alias_dependent_group(setup_context_main)
+    check_sub_dependent_group(setup_context_main)
+    check_install_and_setup_requires(setup_context_main)
+
+    print()
+    print()
+    for error in errors:
+        print(error)
+
+    print()
+
+    if errors:
+        sys.exit(1)
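
As a usage sketch, assuming the new check is executable at the path above, it can be run directly from the repository root or through the setup-order hook id configured earlier in .pre-commit-config.yaml:

  ./scripts/ci/pre_commit/pre_commit_check_order_setup.py
  pre-commit run setup-order --all-files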
diff --git a/scripts/ci/pre_commit_ci_build.sh b/scripts/ci/pre_commit/pre_commit_ci_build.sh
similarity index 88%
rename from scripts/ci/pre_commit_ci_build.sh
rename to scripts/ci/pre_commit/pre_commit_ci_build.sh
index 6f25c39..861785b 100755
--- a/scripts/ci/pre_commit_ci_build.sh
+++ b/scripts/ci/pre_commit/pre_commit_ci_build.sh
@@ -18,8 +18,8 @@
 export PYTHON_MAJOR_MINOR_VERSION="${1}"
 export REMEMBER_LAST_ANSWER="${2}"
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 forget_last_answer
 
diff --git a/scripts/ci/pre_commit_flake8.sh b/scripts/ci/pre_commit/pre_commit_flake8.sh
similarity index 87%
rename from scripts/ci/pre_commit_flake8.sh
rename to scripts/ci/pre_commit/pre_commit_flake8.sh
index dd2a3da..95f1b98 100755
--- a/scripts/ci/pre_commit_flake8.sh
+++ b/scripts/ci/pre_commit/pre_commit_flake8.sh
@@ -18,5 +18,5 @@
 export FORCE_ANSWER_TO_QUESTIONS=${FORCE_ANSWER_TO_QUESTIONS:="quit"}
 export REMEMBER_LAST_ANSWER="true"
 
-# shellcheck source=scripts/ci/ci_flake8.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/ci_flake8.sh" "${@}"
+# shellcheck source=scripts/ci/static_checks/ci_flake8.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../static_checks/ci_flake8.sh" "${@}"
diff --git a/scripts/ci/pre_commit_generate_requirements.sh b/scripts/ci/pre_commit/pre_commit_generate_requirements.sh
similarity index 85%
rename from scripts/ci/pre_commit_generate_requirements.sh
rename to scripts/ci/pre_commit/pre_commit_generate_requirements.sh
index f9e952e..d0c2deb 100755
--- a/scripts/ci/pre_commit_generate_requirements.sh
+++ b/scripts/ci/pre_commit/pre_commit_generate_requirements.sh
@@ -20,5 +20,5 @@ export REMEMBER_LAST_ANSWER="true"
 
 export PYTHON_MAJOR_MINOR_VERSION="${1}"
 
-# shellcheck source=scripts/ci/ci_generate_requirements.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/ci_generate_requirements.sh"
+# shellcheck source=scripts/ci/requirements/ci_generate_requirements.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../generate_requirements/ci_generate_requirements.sh"
diff --git a/tests/insert_extras.py b/scripts/ci/pre_commit/pre_commit_insert_extras.py
old mode 100644
new mode 100755
similarity index 93%
rename from tests/insert_extras.py
rename to scripts/ci/pre_commit/pre_commit_insert_extras.py
index b38b543..9c441df
--- a/tests/insert_extras.py
+++ b/scripts/ci/pre_commit/pre_commit_insert_extras.py
@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -20,8 +21,9 @@ from os.path import dirname
 from textwrap import wrap
 from typing import List
 
-AIRFLOW_SOURCES_DIR = os.path.join(dirname(__file__), os.pardir)
+AIRFLOW_SOURCES_DIR = os.path.join(dirname(__file__), os.pardir, os.pardir, os.pardir)
 
+sys.path.insert(0, AIRFLOW_SOURCES_DIR)
 # flake8: noqa: F401
 # pylint: disable=wrong-import-position
 from setup import EXTRAS_REQUIREMENTS  # isort:skip
diff --git a/scripts/ci/pre_commit_lint_dockerfile.sh b/scripts/ci/pre_commit/pre_commit_lint_dockerfile.sh
similarity index 84%
rename from scripts/ci/pre_commit_lint_dockerfile.sh
rename to scripts/ci/pre_commit/pre_commit_lint_dockerfile.sh
index bce941b..857c27e 100755
--- a/scripts/ci/pre_commit_lint_dockerfile.sh
+++ b/scripts/ci/pre_commit/pre_commit_lint_dockerfile.sh
@@ -17,5 +17,5 @@
 # under the License.
 export REMEMBER_LAST_ANSWER="true"
 
-# shellcheck source=scripts/ci/ci_lint_dockerfile.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/ci_lint_dockerfile.sh" "${@}"
+# shellcheck source=scripts/ci/static_checks/ci_lint_dockerfile.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../static_checks/ci_lint_dockerfile.sh" "${@}"
diff --git a/scripts/ci/pre_commit_local_yml_mounts.sh b/scripts/ci/pre_commit/pre_commit_local_yml_mounts.sh
similarity index 86%
rename from scripts/ci/pre_commit_local_yml_mounts.sh
rename to scripts/ci/pre_commit/pre_commit_local_yml_mounts.sh
index b638a08..43ece7e 100755
--- a/scripts/ci/pre_commit_local_yml_mounts.sh
+++ b/scripts/ci/pre_commit/pre_commit_local_yml_mounts.sh
@@ -18,17 +18,15 @@
 
 set -euo pipefail
 
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 TMP_OUTPUT=$(mktemp)
 
 # Remove temp file if it's hanging around
 trap 'rm -rf -- "${TMP_OUTPUT}" 2>/dev/null' EXIT
 
-LOCAL_YML_FILE="${MY_DIR}/docker-compose/local.yml"
+LOCAL_YML_FILE="${AIRFLOW_SOURCES}/scripts/ci/docker-compose/local.yml"
 
 LEAD='      # START automatically generated volumes from LOCAL_MOUNTS in _local_mounts.sh'
 TAIL='      # END automatically generated volumes from LOCAL_MOUNTS in _local_mounts.sh'
diff --git a/scripts/ci/pre_commit_mypy.sh b/scripts/ci/pre_commit/pre_commit_mypy.sh
similarity index 87%
rename from scripts/ci/pre_commit_mypy.sh
rename to scripts/ci/pre_commit/pre_commit_mypy.sh
index 6f0025f..f5ab254 100755
--- a/scripts/ci/pre_commit_mypy.sh
+++ b/scripts/ci/pre_commit/pre_commit_mypy.sh
@@ -18,5 +18,5 @@
 export FORCE_ANSWER_TO_QUESTIONS=${FORCE_ANSWER_TO_QUESTIONS:="quit"}
 export REMEMBER_LAST_ANSWER="true"
 
-# shellcheck source=scripts/ci/ci_mypy.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/ci_mypy.sh" "${@}"
+# shellcheck source=scripts/ci/static_checks/ci_mypy.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../static_checks/ci_mypy.sh" "${@}"
diff --git a/scripts/ci/pre_commit_yaml_to_cfg.py b/scripts/ci/pre_commit/pre_commit_yaml_to_cfg.py
similarity index 97%
rename from scripts/ci/pre_commit_yaml_to_cfg.py
rename to scripts/ci/pre_commit/pre_commit_yaml_to_cfg.py
index b826864..a578689 100755
--- a/scripts/ci/pre_commit_yaml_to_cfg.py
+++ b/scripts/ci/pre_commit/pre_commit_yaml_to_cfg.py
@@ -1,5 +1,4 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
+#!/usr/bin/env python3
 #
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
@@ -127,7 +126,9 @@ def write_config(yaml_config_file_path, default_cfg_file_path):
 
 if __name__ == '__main__':
     airflow_config_dir = os.path.join(
-        os.path.dirname(__file__), "../../airflow/config_templates")
+        os.path.dirname(__file__),
+        os.pardir, os.pardir, os.pardir,
+        "airflow", "config_templates")
     airflow_default_config_path = os.path.join(airflow_config_dir, "default_airflow.cfg")
     airflow_config_yaml_file_path = os.path.join(airflow_config_dir, "config.yml")
 
diff --git a/scripts/ci/pre_commit_update_extras.sh b/scripts/ci/pre_commit_update_extras.sh
deleted file mode 100755
index be43a91..0000000
--- a/scripts/ci/pre_commit_update_extras.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-set -euo pipefail
-
-MY_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-cd "${MY_DIR}/../../" || exit;
-
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
-
-PYTHONPATH="$(pwd)"
-export PYTHONPATH
-
-python3 tests/insert_extras.py
diff --git a/scripts/ci/ci_generate_requirements.sh b/scripts/ci/requirements/ci_generate_requirements.sh
similarity index 88%
rename from scripts/ci/ci_generate_requirements.sh
rename to scripts/ci/requirements/ci_generate_requirements.sh
index f55799f..5cc4a0e 100755
--- a/scripts/ci/ci_generate_requirements.sh
+++ b/scripts/ci/requirements/ci_generate_requirements.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 get_environment_for_builds_on_ci
 
diff --git a/scripts/ci/ci_bat_tests.sh b/scripts/ci/static_checks/ci_bat_tests.sh
similarity index 90%
rename from scripts/ci/ci_bat_tests.sh
rename to scripts/ci/static_checks/ci_bat_tests.sh
index 25011ce..e50c559 100755
--- a/scripts/ci/ci_bat_tests.sh
+++ b/scripts/ci/static_checks/ci_bat_tests.sh
@@ -16,6 +16,9 @@
 # specific language governing permissions and limitations
 # under the License.
 
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
+
 function run_bats_tests() {
     FILES=("$@")
     if [[ "${#FILES[@]}" == "0" ]]; then
diff --git a/scripts/ci/ci_check_license.sh b/scripts/ci/static_checks/ci_check_license.sh
similarity index 94%
rename from scripts/ci/ci_check_license.sh
rename to scripts/ci/static_checks/ci_check_license.sh
index da5aebd..3d887c4 100755
--- a/scripts/ci/ci_check_license.sh
+++ b/scripts/ci/static_checks/ci_check_license.sh
@@ -18,8 +18,8 @@
 export MOUNT_SOURCE_DIR_FOR_STATIC_CHECKS="true"
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 function run_check_license() {
     echo
diff --git a/scripts/ci/ci_flake8.sh b/scripts/ci/static_checks/ci_flake8.sh
similarity index 94%
rename from scripts/ci/ci_flake8.sh
rename to scripts/ci/static_checks/ci_flake8.sh
index 33504c0..4ebd060 100755
--- a/scripts/ci/ci_flake8.sh
+++ b/scripts/ci/static_checks/ci_flake8.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 function run_flake8() {
     FILES=("$@")
diff --git a/scripts/ci/ci_lint_dockerfile.sh b/scripts/ci/static_checks/ci_lint_dockerfile.sh
similarity index 92%
rename from scripts/ci/ci_lint_dockerfile.sh
rename to scripts/ci/static_checks/ci_lint_dockerfile.sh
index 29a5f68..2e48043 100755
--- a/scripts/ci/ci_lint_dockerfile.sh
+++ b/scripts/ci/static_checks/ci_lint_dockerfile.sh
@@ -15,8 +15,8 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 function run_docker_lint() {
     FILES=("$@")
diff --git a/scripts/ci/ci_mypy.sh b/scripts/ci/static_checks/ci_mypy.sh
similarity index 93%
rename from scripts/ci/ci_mypy.sh
rename to scripts/ci/static_checks/ci_mypy.sh
index 8cc3028..b6fc56e 100755
--- a/scripts/ci/ci_mypy.sh
+++ b/scripts/ci/static_checks/ci_mypy.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=3.6
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 function run_mypy() {
     FILES=("$@")
diff --git a/scripts/ci/ci_run_static_checks.sh b/scripts/ci/static_checks/ci_run_static_checks.sh
similarity index 91%
rename from scripts/ci/ci_run_static_checks.sh
rename to scripts/ci/static_checks/ci_run_static_checks.sh
index bfdae1a..6b7f124 100755
--- a/scripts/ci/ci_run_static_checks.sh
+++ b/scripts/ci/static_checks/ci_run_static_checks.sh
@@ -17,8 +17,8 @@
 # under the License.
 export PYTHON_MAJOR_MINOR_VERSION=3.6
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 if [[ -f ${BUILD_CACHE_DIR}/.skip_tests ]]; then
     echo
diff --git a/scripts/ci/ci_run_airflow_testing.sh b/scripts/ci/testing/ci_run_airflow_testing.sh
similarity index 82%
rename from scripts/ci/ci_run_airflow_testing.sh
rename to scripts/ci/testing/ci_run_airflow_testing.sh
index 09dfb7d..7949996 100755
--- a/scripts/ci/ci_run_airflow_testing.sh
+++ b/scripts/ci/testing/ci_run_airflow_testing.sh
@@ -17,28 +17,29 @@
 # under the License.
 export VERBOSE=${VERBOSE:="false"}
 
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
+
+if [[ -f ${BUILD_CACHE_DIR}/.skip_tests ]]; then
+    echo
+    echo "Skipping running tests !!!!!"
+    echo
+    exit
+fi
+
+
 function run_airflow_testing_in_docker() {
     set +u
     # shellcheck disable=SC2016
     docker-compose --log-level INFO \
-      -f "${MY_DIR}/docker-compose/base.yml" \
-      -f "${MY_DIR}/docker-compose/backend-${BACKEND}.yml" \
+      -f "${SCRIPTS_CI_DIR}/docker-compose/base.yml" \
+      -f "${SCRIPTS_CI_DIR}/docker-compose/backend-${BACKEND}.yml" \
       "${INTEGRATIONS[@]}" \
       "${DOCKER_COMPOSE_LOCAL[@]}" \
          run airflow "${@}"
     set -u
 }
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
-
-if [[ -f ${BUILD_CACHE_DIR}/.skip_tests ]]; then
-    echo
-    echo "Skipping running tests !!!!!"
-    echo
-    exit
-fi
-
 get_environment_for_builds_on_ci
 
 prepare_ci_build
@@ -66,17 +67,17 @@ export FORWARD_CREDENTIALS=${FORWARD_CREDENTIALS:="false"}
 export INSTALL_AIRFLOW_VERSION=${INSTALL_AIRFLOW_VERSION:=""}
 
 if [[ ${MOUNT_LOCAL_SOURCES} == "true" ]]; then
-    DOCKER_COMPOSE_LOCAL=("-f" "${MY_DIR}/docker-compose/local.yml")
+    DOCKER_COMPOSE_LOCAL=("-f" "${SCRIPTS_CI_DIR}/docker-compose/local.yml")
 else
     DOCKER_COMPOSE_LOCAL=()
 fi
 
 if [[ ${FORWARD_CREDENTIALS} == "true" ]]; then
-    DOCKER_COMPOSE_LOCAL+=("-f" "${MY_DIR}/docker-compose/forward-credentials.yml")
+    DOCKER_COMPOSE_LOCAL+=("-f" "${SCRIPTS_CI_DIR}/docker-compose/forward-credentials.yml")
 fi
 
 if [[ ${INSTALL_AIRFLOW_VERSION} != "" || ${INSTALL_AIRFLOW_REFERENCE} != "" ]]; then
-    DOCKER_COMPOSE_LOCAL+=("-f" "${MY_DIR}/docker-compose/remove-sources.yml")
+    DOCKER_COMPOSE_LOCAL+=("-f" "${SCRIPTS_CI_DIR}/docker-compose/remove-sources.yml")
 fi
 
 echo
@@ -99,7 +100,7 @@ fi
 for _INT in ${ENABLED_INTEGRATIONS}
 do
     INTEGRATIONS+=("-f")
-    INTEGRATIONS+=("${MY_DIR}/docker-compose/integration-${_INT}.yml")
+    INTEGRATIONS+=("${SCRIPTS_CI_DIR}/docker-compose/integration-${_INT}.yml")
 done
 
 RUN_INTEGRATION_TESTS=${RUN_INTEGRATION_TESTS:=""}
diff --git a/scripts/ci/ci_count_changed_files.sh b/scripts/ci/tools/ci_count_changed_files.sh
similarity index 92%
rename from scripts/ci/ci_count_changed_files.sh
rename to scripts/ci/tools/ci_count_changed_files.sh
index d1ccd4b..e3d2c9e 100755
--- a/scripts/ci/ci_count_changed_files.sh
+++ b/scripts/ci/tools/ci_count_changed_files.sh
@@ -22,8 +22,8 @@
 #  $1: Revision to compare
 #  $2: Pattern to match
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 get_environment_for_builds_on_ci
 
diff --git a/scripts/ci/ci_fix_ownership.sh b/scripts/ci/tools/ci_fix_ownership.sh
similarity index 82%
rename from scripts/ci/ci_fix_ownership.sh
rename to scripts/ci/tools/ci_fix_ownership.sh
index 7e85152..8cde42d 100755
--- a/scripts/ci/ci_fix_ownership.sh
+++ b/scripts/ci/tools/ci_fix_ownership.sh
@@ -21,8 +21,8 @@
 #
 export PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION:-3.6}
 
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 export AIRFLOW_CI_IMAGE=\
 ${DOCKERHUB_USER}/${DOCKERHUB_REPO}:${BRANCH_NAME}-python${PYTHON_MAJOR_MINOR_VERSION}-ci
@@ -39,7 +39,7 @@ export HOST_OS
 export BACKEND="sqlite"
 
 docker-compose \
-    -f "${MY_DIR}/docker-compose/base.yml" \
-    -f "${MY_DIR}/docker-compose/local.yml" \
-    -f "${MY_DIR}/docker-compose/forward-credentials.yml" \
+    -f "${SCRIPTS_CI_DIR}/docker-compose/base.yml" \
+    -f "${SCRIPTS_CI_DIR}/docker-compose/local.yml" \
+    -f "${SCRIPTS_CI_DIR}/docker-compose/forward-credentials.yml" \
     run airflow /opt/airflow/scripts/ci/in_container/run_fix_ownership.sh
diff --git a/scripts/ci/ci_free_space_on_ci.sh b/scripts/ci/tools/ci_free_space_on_ci.sh
similarity index 87%
rename from scripts/ci/ci_free_space_on_ci.sh
rename to scripts/ci/tools/ci_free_space_on_ci.sh
index 5d8f851..a50add7 100755
--- a/scripts/ci/ci_free_space_on_ci.sh
+++ b/scripts/ci/tools/ci_free_space_on_ci.sh
@@ -15,8 +15,8 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-# shellcheck source=scripts/ci/_script_init.sh
-. "$( dirname "${BASH_SOURCE[0]}" )/_script_init.sh"
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
 sudo swapoff -a
 sudo rm -f /swapfile
diff --git a/tests/bats/bats_utils.bash b/tests/bats/bats_utils.bash
index 7a29da9..c06b182 100644
--- a/tests/bats/bats_utils.bash
+++ b/tests/bats/bats_utils.bash
@@ -18,5 +18,5 @@
 AIRFLOW_SOURCES=$(pwd)
 export AIRFLOW_SOURCES
 export SCRIPTS_CI_DIR=${AIRFLOW_SOURCES}/scripts/ci
-# shellcheck source=scripts/ci/_all_libs.sh
-source "${SCRIPTS_CI_DIR}/_all_libs.sh"
+# shellcheck source=scripts/ci/libraries/_all_libs.sh
+source "${SCRIPTS_CI_DIR}/libraries/_all_libs.sh"
diff --git a/tests/test_order_setup.py b/tests/test_order_setup.py
deleted file mode 100755
index 5c40ea3..0000000
--- a/tests/test_order_setup.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-"""
-Test for an order of dependencies in setup.py
-"""
-
-import os
-import re
-import unittest
-
-
-class TestOrderSetup(unittest.TestCase):
-
-    def setUp(self):
-        current_dir = os.path.dirname(os.path.abspath(__file__))
-        parent_dir = os.path.dirname(current_dir)
-        self.setup_file = open('{parent_dir}/setup.py'.format(parent_dir=parent_dir))
-        self.setup_context = self.setup_file.read()
-
-    def tearDown(self):
-        self.setup_file.close()
-
-    def test_main_dependent_group(self):
-        """
-        Test for an order of dependencies groups between mark
-        '# Start dependencies group' and '# End dependencies group' in setup.py
-        """
-        pattern_main_dependent_group = re.compile(
-            '# Start dependencies group\n(.*)# End dependencies group', re.DOTALL)
-        main_dependent_group = pattern_main_dependent_group.findall(self.setup_context)[0]
-
-        pattern_sub_dependent = re.compile(' = \\[.*?\\]\n', re.DOTALL)
-        main_dependent = pattern_sub_dependent.sub(',', main_dependent_group)
-
-        src = main_dependent.strip(',').split(',')
-        alphabetical = sorted(src)
-        self.assertListEqual(alphabetical, src)
-
-    def test_sub_dependent_group(self):
-        """
-        Test for an order of each dependencies groups declare like
-        `^dependent_group_name = [.*?]\n` in setup.py
-        """
-        pattern_dependent_group_name = re.compile('^(\\w+) = \\[', re.MULTILINE)
-        dependent_group_names = pattern_dependent_group_name.findall(self.setup_context)
-
-        pattern_dependent_version = re.compile('[~|>|<|=|;].*')
-        for group_name in dependent_group_names:
-            pattern_sub_dependent = re.compile(
-                '{group_name} = \\[(.*?)\\]'.format(group_name=group_name), re.DOTALL)
-            sub_dependent = pattern_sub_dependent.findall(self.setup_context)[0]
-            pattern_dependent = re.compile('\'(.*?)\'')
-            dependent = pattern_dependent.findall(sub_dependent)
-
-            src = [pattern_dependent_version.sub('', p) for p in dependent]
-            alphabetical = sorted(src)
-            self.assertListEqual(alphabetical, src)
-
-    def test_alias_dependent_group(self):
-        """
-        Test for an order of each dependencies groups declare like
-        `alias_dependent_group = dependent_group_1 + ... + dependent_group_n` in setup.py
-        """
-        pattern = re.compile('^\\w+ = (\\w+ \\+.*)', re.MULTILINE)
-        dependents = pattern.findall(self.setup_context)
-        for dependent in dependents:
-            src = dependent.split(' + ')
-            alphabetical = sorted(src)
-            self.assertListEqual(alphabetical, src)
-
-    def test_devel_all(self):
-        """
-        Test for an order of dependencies groups
-        devel_all = (dependent_group_1 + ... + dependent_group_n) in setup.py
-        """
-        pattern = re.compile('devel_all = \\((.*?)\\)', re.DOTALL)
-        dependent = pattern.findall(self.setup_context)[0]
-        pattern_new_line = re.compile('\\n *')
-
-        src = pattern_new_line.sub(' ', dependent).split(' + ')
-        alphabetical = sorted(src)
-        self.assertListEqual(alphabetical, src)
-
-    def test_install_and_setup_requires(self):
-        """
-        Test for an order of dependencies in function do_setup section
-        install_requires and setup_requires in setup.py
-        """
-        pattern_install_and_setup_requires = re.compile(
-            '(INSTALL_REQUIREMENTS|setup_requires) ?= ?\\[(.*?)\\]', re.DOTALL)
-        install_and_setup_requires = pattern_install_and_setup_requires.findall(self.setup_context)
-
-        for dependent_requires in install_and_setup_requires:
-            pattern_dependent = re.compile('\'(.*?)\'')
-            dependent = pattern_dependent.findall(dependent_requires[1])
-            pattern_dependent_version = re.compile('[~|>|<|=|;].*')
-
-            src = [pattern_dependent_version.sub('', p) for p in dependent]
-            alphabetical = sorted(src)
-            self.assertListEqual(alphabetical, src)
-
-    def test_extras_require(self):
-        """
-        Test for an order of dependencies in function do_setup section
-        extras_require in setup.py
-        """
-        pattern_extras_requires = re.compile('EXTRAS_REQUIREMENTS = \\{(.*?)\\}', re.DOTALL)
-        extras_requires = pattern_extras_requires.findall(self.setup_context)[0]
-
-        pattern_dependent = re.compile('\'(.*?)\'')
-        src = pattern_dependent.findall(extras_requires)
-        alphabetical = sorted(src)
-        self.assertListEqual(alphabetical, src)
-
-
-if __name__ == '__main__':
-    unittest.main(verbosity=2)


[airflow] 06/32: The group of embedded DAGs should be root to be OpenShift compatible (#9794)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit d61c33db6118c1f0e16341e2dd8b20557c041677
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Mon Jul 13 20:47:55 2020 +0200

    The group of embedded DAGs should be root to be OpenShift compatible (#9794)
    
    (cherry picked from commit 8f6b8378aa46c8226b8dd56c509affe7f2b5a4bc)
---
 Dockerfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Dockerfile b/Dockerfile
index 89225b8..a882178 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -358,7 +358,7 @@ COPY scripts/prod/clean-logs.sh /clean-logs
 
 ARG EMBEDDED_DAGS="empty"
 
-COPY --chown=airflow:airflow ${EMBEDDED_DAGS}/ ${AIRFLOW_HOME}/dags/
+COPY --chown=airflow:root ${EMBEDDED_DAGS}/ ${AIRFLOW_HOME}/dags/
 
 RUN chmod a+x /entrypoint /clean-logs
 


[airflow] 10/32: Reorganizing of CI tests (#9654)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit f6c8f51d982ae50c8524a740b9e44a974d5c8748
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Fri Jul 17 10:30:56 2020 +0200

    Reorganizing of CI tests (#9654)
    
    * we come back to the idea of having one CI workflow
    * cancel and openapi are incorporated into that CI workflow
    * cancel retrieves workflow id automatically (works for forks)
    * static checks are now merged into one job
    * fewer dependencies between jobs so that waiting is minimised
    * better name for the check of whether tests should be run
    * separated out the "should tests be run" check into its own script
    
    (cherry picked from commit 496ed6f1b279dc5ca560dec0044c8373531565f4)
---
 .github/workflows/ci.yml                           | 102 ++++++++++++---------
 .../get_workflow_id.sh}                            |  18 ++--
 scripts/ci/{ => kubernetes}/ci_run_helm_testing.sh |   2 +-
 scripts/ci/libraries/_initialization.sh            |   2 +-
 ...files.sh => ci_check_if_tests_should_be_run.sh} |  69 +++++++-------
 scripts/ci/tools/ci_count_changed_files.sh         |  19 +++-
 6 files changed, 119 insertions(+), 93 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index a849d58..604fa0d 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -40,10 +40,30 @@ env:
 
 jobs:
 
+  cancel-previous-workflow-run:
+    timeout-minutes: 60
+    name: "Cancel previous workflow run"
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@master
+      - name: Get ci workflow id
+        run: "scripts/ci/cancel/get_workflow_id.sh"
+        env:
+          WORKFLOW: ci
+          GITHUB_TOKEN: ${{ github.token }}
+          GITHUB_REPOSITORY: ${{ github.repository }}
+      - name: Cancel workflow ${{ github.workflow }}
+        uses: styfle/cancel-workflow-action@0.3.2
+        with:
+          workflow_id: ${{ env.WORKFLOW_ID }}
+          access_token: ${{ github.token }}
+
   static-checks:
     timeout-minutes: 60
     name: "Checks"
     runs-on: ubuntu-latest
+    needs:
+      - cancel-previous-workflow-run
     env:
       MOUNT_SOURCE_DIR_FOR_STATIC_CHECKS: "true"
       CI_JOB_TYPE: "Static checks"
@@ -69,11 +89,12 @@ jobs:
           python -m pip install pre-commit \
               --constraint requirements/requirements-python${PYTHON_MAJOR_MINOR_VERSION}.txt
           ./scripts/ci/static_checks/ci_run_static_checks.sh
-
   docs:
     timeout-minutes: 60
-    name: Build docs
+    name: "Build docs"
     runs-on: ubuntu-latest
+    needs:
+      - cancel-previous-workflow-run
     env:
       CI_JOB_TYPE: "Documentation"
     steps:
@@ -86,42 +107,25 @@ jobs:
       - name: "Build docs"
         run: ./scripts/ci/docs/ci_docs.sh
 
-  build-prod-image:
-    timeout-minutes: 60
-    name: "Build prod image Py${{ matrix.python-version }}"
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
-    env:
-      PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
-      CI_JOB_TYPE: "Prod image"
-    steps:
-      - uses: actions/checkout@master
-      - name: "Build PROD image ${{ matrix.python-version }}"
-        run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
-
   trigger-tests:
-    timeout-minutes: 10
-    name: "Count changed important files"
+    timeout-minutes: 5
+    name: "Checks if tests should be run"
     runs-on: ubuntu-latest
+    needs:
+      - cancel-previous-workflow-run
     outputs:
-      count: ${{ steps.trigger-tests.outputs.count }}
+      run-tests: ${{ steps.trigger-tests.outputs.run-tests }}
     steps:
       - uses: actions/checkout@master
-      - name: "Get count of changed python files"
-        run: |
-          set +e
-          ./scripts/ci/tools/ci_count_changed_files.sh ${GITHUB_SHA} \
-              '^airflow|.github/workflows/|^Dockerfile|^scripts|^chart|^setup.py|^requirements|^tests|^kubernetes_tests'
-          echo "::set-output name=count::$?"
+      - name: "Check if tests should be run"
+        run: "./scripts/ci/tools/ci_check_if_tests_should_be_run.sh"
         id: trigger-tests
 
   tests-kubernetes:
     timeout-minutes: 80
     name: "K8s: ${{matrix.kube-mode}} ${{matrix.python-version}} ${{matrix.kubernetes-version}}"
     runs-on: ubuntu-latest
-    needs: [static-checks, trigger-tests]
+    needs: [trigger-tests]
     strategy:
       matrix:
         python-version: [3.6, 3.7]
@@ -147,8 +151,7 @@ jobs:
       KUBERNETES_VERSION: "${{ matrix.kubernetes-version }}"
       KIND_VERSION: "${{ matrix.kind-version }}"
       HELM_VERSION: "${{ matrix.helm-version }}"
-    # For pull requests only run tests when important files changed
-    if: needs.trigger-tests.outputs.count != '0' || github.event_name != 'pull_request'
+    if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -186,7 +189,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     timeout-minutes: 80
     name: "${{matrix.test-type}}:Pg${{matrix.postgres-version}},Py${{matrix.python-version}}"
     runs-on: ubuntu-latest
-    needs: [static-checks, trigger-tests]
+    needs: [trigger-tests]
     strategy:
       matrix:
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
@@ -200,8 +203,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       RUN_TESTS: "true"
       CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
-    # For pull requests only run tests when important files changed
-    if: needs.trigger-tests.outputs.count != '0' || github.event_name != 'pull_request'
+    if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -218,7 +220,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     timeout-minutes: 80
     name: "${{matrix.test-type}}:MySQL${{matrix.mysql-version}}, Py${{matrix.python-version}}"
     runs-on: ubuntu-latest
-    needs: [static-checks, trigger-tests]
+    needs: [trigger-tests]
     strategy:
       matrix:
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
@@ -232,8 +234,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       RUN_TESTS: "true"
       CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
-    # For pull requests only run tests when important files changed
-    if: needs.trigger-tests.outputs.count != '0' || github.event_name != 'pull_request'
+    if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -250,7 +251,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     timeout-minutes: 80
     name: "${{matrix.test-type}}:Sqlite Py${{matrix.python-version}}"
     runs-on: ubuntu-latest
-    needs: [static-checks, trigger-tests]
+    needs: [trigger-tests]
     strategy:
       matrix:
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
@@ -262,8 +263,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       TEST_TYPE: ${{ matrix.test-type }}
       RUN_TESTS: "true"
       CI_JOB_TYPE: "Tests"
-    # For pull requests only run tests when python files changed
-    if: needs.trigger-tests.outputs.count != '0' || github.event_name != 'pull_request'
+    if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -281,7 +281,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     name: "${{matrix.test-type}}:Pg${{matrix.postgres-version}},Py${{matrix.python-version}}"
     runs-on: ubuntu-latest
     continue-on-error: true
-    needs: [static-checks, trigger-tests]
+    needs: [trigger-tests]
     strategy:
       matrix:
         python-version: [3.6]
@@ -295,8 +295,7 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       RUN_TESTS: "true"
       CI_JOB_TYPE: "Tests"
       TEST_TYPE: ${{ matrix.test-type }}
-    # For pull requests only run tests when important files changed
-    if: needs.trigger-tests.outputs.count != '0' || github.event_name != 'pull_request'
+    if: needs.trigger-tests.outputs.run-tests == 'true' || github.event_name != 'pull_request'
     steps:
       - uses: actions/checkout@master
       - uses: actions/setup-python@v1
@@ -313,12 +312,14 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
     timeout-minutes: 5
     name: "Checks: Helm tests"
     runs-on: ubuntu-latest
+    needs:
+      - cancel-previous-workflow-run
     env:
       CI_JOB_TYPE: "Tests"
     steps:
       - uses: actions/checkout@master
       - name: "Helm Tests"
-        run: ./scripts/ci/ci_run_helm_testing.sh
+        run: ./scripts/ci/kubernetes/ci_run_helm_testing.sh
 
   requirements:
     timeout-minutes: 80
@@ -328,6 +329,8 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       matrix:
         python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
       fail-fast: false
+    needs:
+      - cancel-previous-workflow-run
     env:
       PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
       CHECK_REQUIREMENTS_ONLY: true
@@ -343,6 +346,21 @@ ${{ hashFiles('requirements/requirements-python${{matrix.python-version}}.txt')
       - name: "Generate requirements"
         run: ./scripts/ci/requirements/ci_generate_requirements.sh
 
+  build-prod-image:
+    timeout-minutes: 60
+    name: "Build prod image Py${{ matrix.python-version }}"
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
+    env:
+      PYTHON_MAJOR_MINOR_VERSION: ${{ matrix.python-version }}
+      CI_JOB_TYPE: "Prod image"
+    steps:
+      - uses: actions/checkout@master
+      - name: "Build PROD image ${{ matrix.python-version }}"
+        run: ./scripts/ci/images/ci_prepare_prod_image_on_ci.sh
+
   push-prod-images-to-github-cache:
     timeout-minutes: 80
     name: "Push PROD images"
diff --git a/scripts/ci/ci_run_helm_testing.sh b/scripts/ci/cancel/get_workflow_id.sh
similarity index 67%
copy from scripts/ci/ci_run_helm_testing.sh
copy to scripts/ci/cancel/get_workflow_id.sh
index 0a267d4..4fa6187 100755
--- a/scripts/ci/ci_run_helm_testing.sh
+++ b/scripts/ci/cancel/get_workflow_id.sh
@@ -15,14 +15,10 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-
-echo "Running helm tests"
-
-CHART_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/../../chart/"
-
-echo "Chart directory is $CHART_DIR"
-
-docker run -w /airflow-chart -v "$CHART_DIR":/airflow-chart \
-  --entrypoint /bin/sh \
-  aneeshkj/helm-unittest \
-  -c "helm repo add stable https://kubernetes-charts.storage.googleapis.com; helm dependency update ; helm unittest ."
+set -euo pipefail
+echo "Getting workflow id for ${WORKFLOW}. Github Repo: ${GITHUB_REPOSITORY}"
+URL="https://api.github.com/repos/${GITHUB_REPOSITORY}/actions/workflows/${WORKFLOW}.yml"
+echo "Calling URL: ${URL}"
+WORKFLOW_ID=$(curl -H "Authorization: token ${GITHUB_TOKEN}" "${URL}" | jq '.id')
+echo "Workflow id for ${WORKFLOW}: ${WORKFLOW_ID}"
+echo "::set-env name=WORKFLOW_ID::${WORKFLOW_ID}"
diff --git a/scripts/ci/ci_run_helm_testing.sh b/scripts/ci/kubernetes/ci_run_helm_testing.sh
similarity index 98%
rename from scripts/ci/ci_run_helm_testing.sh
rename to scripts/ci/kubernetes/ci_run_helm_testing.sh
index 0a267d4..d48e298 100755
--- a/scripts/ci/ci_run_helm_testing.sh
+++ b/scripts/ci/kubernetes/ci_run_helm_testing.sh
@@ -18,7 +18,7 @@
 
 echo "Running helm tests"
 
-CHART_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/../../chart/"
+CHART_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/../../../chart/"
 
 echo "Chart directory is $CHART_DIR"
 
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index d0b14a7..e759b03 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -280,7 +280,7 @@ function get_environment_for_builds_on_ci() {
             fi
         elif [[ ${GITHUB_ACTIONS:=} == "true" ]]; then
             export CI_TARGET_REPO="${GITHUB_REPOSITORY}"
-            export CI_TARGET_BRANCH="${GITHUB_BASE_REF}"
+            export CI_TARGET_BRANCH="${GITHUB_BASE_REF:=${CI_TARGET_BRANCH}}"
             export CI_BUILD_ID="${GITHUB_RUN_ID}"
             export CI_JOB_ID="${GITHUB_JOB}"
             if [[ ${GITHUB_EVENT_NAME:=} == "pull_request" ]]; then
diff --git a/scripts/ci/tools/ci_count_changed_files.sh b/scripts/ci/tools/ci_check_if_tests_should_be_run.sh
similarity index 53%
copy from scripts/ci/tools/ci_count_changed_files.sh
copy to scripts/ci/tools/ci_check_if_tests_should_be_run.sh
index e3d2c9e..5a51048 100755
--- a/scripts/ci/tools/ci_count_changed_files.sh
+++ b/scripts/ci/tools/ci_check_if_tests_should_be_run.sh
@@ -16,40 +16,41 @@
 # specific language governing permissions and limitations
 # under the License.
 
-# Returns number of files matching the pattern changed in revision specified
-# Versus the tip of the target branch
-# Parameters
-#  $1: Revision to compare
-#  $2: Pattern to match
-
 # shellcheck source=scripts/ci/libraries/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
-get_environment_for_builds_on_ci
-
-git remote add target "https://github.com/${CI_TARGET_REPO}"
-
-git fetch target "${CI_TARGET_BRANCH}:${CI_TARGET_BRANCH}" --depth=1
-
-CHANGED_FILES=$(git diff-tree --no-commit-id --name-only -r "${1}" "${CI_TARGET_BRANCH}" || true)
-
-echo
-echo "Changed files:"
-echo
-echo "${CHANGED_FILES}"
-echo
-
-echo
-echo "Changed files matching the ${2} pattern"
-echo
-echo "${CHANGED_FILES}" | grep -E "${2}" || true
-echo
-
-echo
-echo "Count changed files matching the ${2} pattern"
-echo
-COUNT_CHANGED_FILES=$(echo "${CHANGED_FILES}" | grep -c -E "${2}" || true)
-echo "${COUNT_CHANGED_FILES}"
-echo
-
-exit "${COUNT_CHANGED_FILES}"
+CHANGED_FILES_PATTERNS=(
+    "^airflow"
+    "^.github/workflows/"
+    "^Dockerfile"
+    "^scripts"
+    "^chart"
+    "^setup.py"
+    "^requirements"
+    "^tests"
+    "^kubernetes_tests"
+)
+
+CHANGED_FILES_REGEXP=""
+
+SEPARATOR=""
+for PATTERN in "${CHANGED_FILES_PATTERNS[@]}"
+do
+    CHANGED_FILES_REGEXP="${CHANGED_FILES_REGEXP}${SEPARATOR}${PATTERN}"
+    SEPARATOR="|"
+done
+
+echo
+echo "GitHub SHA: ${GITHUB_SHA}"
+echo
+
+set +e
+"${SCRIPTS_CI_DIR}/tools/ci_count_changed_files.sh" "${CHANGED_FILES_REGEXP}"
+COUNT_CHANGED_FILES=$?
+set -e
+
+if [[ ${COUNT_CHANGED_FILES} == "0" ]]; then
+    echo "::set-output name=run-tests::false"
+else
+    echo "::set-output name=run-tests::true"
+fi
diff --git a/scripts/ci/tools/ci_count_changed_files.sh b/scripts/ci/tools/ci_count_changed_files.sh
index e3d2c9e..ef16eb1 100755
--- a/scripts/ci/tools/ci_count_changed_files.sh
+++ b/scripts/ci/tools/ci_count_changed_files.sh
@@ -27,11 +27,22 @@
 
 get_environment_for_builds_on_ci
 
+if [[ ${CI_EVENT_TYPE} == "push" ]]; then
+    echo
+    echo "Always run all tests on push"
+    echo
+    exit 1
+fi
+
 git remote add target "https://github.com/${CI_TARGET_REPO}"
 
 git fetch target "${CI_TARGET_BRANCH}:${CI_TARGET_BRANCH}" --depth=1
 
-CHANGED_FILES=$(git diff-tree --no-commit-id --name-only -r "${1}" "${CI_TARGET_BRANCH}" || true)
+echo
+echo "Retrieve changed files from ${GITHUB_SHA} comparing to ${CI_TARGET_BRANCH}"
+echo
+
+CHANGED_FILES=$(git diff-tree --no-commit-id --name-only -r "${GITHUB_SHA}" "${CI_TARGET_BRANCH}" || true)
 
 echo
 echo "Changed files:"
@@ -40,13 +51,13 @@ echo "${CHANGED_FILES}"
 echo
 
 echo
-echo "Changed files matching the ${2} pattern"
+echo "Changed files matching the ${1} pattern"
 echo
-echo "${CHANGED_FILES}" | grep -E "${2}" || true
+echo "${CHANGED_FILES}" | grep -E "${1}" || true
 echo
 
 echo
-echo "Count changed files matching the ${2} pattern"
+echo "Count changed files matching the ${1} pattern"
 echo
 COUNT_CHANGED_FILES=$(echo "${CHANGED_FILES}" | grep -c -E "${1}" || true)
 echo "${COUNT_CHANGED_FILES}"


[airflow] 30/32: Fix more PodMutationHook issues for backwards compatibility (#10084)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit c230156739178762d5cef482ace3d7a05e683cc1
Author: Kaxil Naik <ka...@gmail.com>
AuthorDate: Fri Aug 7 11:50:44 2020 +0100

    Fix more PodMutationHook issues for backwards compatibility (#10084)
    
    Co-authored-by: Daniel Imberman <da...@gmail.com>
---
 UPDATING.md                                      |  10 +
 airflow/contrib/executors/kubernetes_executor.py |  20 +
 airflow/contrib/kubernetes/pod.py                | 143 ++++++-
 airflow/executors/kubernetes_executor.py         |   6 +
 airflow/kubernetes/pod.py                        |  31 +-
 airflow/kubernetes/pod_generator.py              |  76 +++-
 airflow/kubernetes/pod_launcher.py               |  73 +++-
 airflow/kubernetes/pod_launcher_helper.py        |  96 -----
 airflow/kubernetes/secret.py                     |  21 +-
 airflow/kubernetes/volume.py                     |  17 +-
 airflow/operators/python_operator.py             |   4 +-
 docs/conf.py                                     |   1 +
 kubernetes_tests/test_kubernetes_pod_operator.py |   1 -
 tests/kubernetes/models/test_pod.py              | 108 +++---
 tests/kubernetes/models/test_volume.py           |  40 ++
 tests/kubernetes/test_pod_generator.py           | 206 +++++++++-
 tests/kubernetes/test_pod_launcher.py            | 153 +++++++-
 tests/kubernetes/test_pod_launcher_helper.py     |  98 -----
 tests/kubernetes/test_worker_configuration.py    |   7 +
 tests/test_local_settings.py                     | 269 -------------
 tests/test_local_settings/__init__.py            |  16 +
 tests/test_local_settings/test_local_settings.py | 461 +++++++++++++++++++++++
 22 files changed, 1289 insertions(+), 568 deletions(-)

diff --git a/UPDATING.md b/UPDATING.md
index f82ba10..4f2b844 100644
--- a/UPDATING.md
+++ b/UPDATING.md
@@ -67,6 +67,16 @@ https://developers.google.com/style/inclusive-documentation
 Previously, when tasks skipped by SkipMixin (such as BranchPythonOperator, BaseBranchOperator and ShortCircuitOperator) are cleared, they execute. Since 1.10.12, when such skipped tasks are cleared,
 they will be skipped again by the newly introduced NotPreviouslySkippedDep.
 
+### The pod_mutation_hook function will now accept a kubernetes V1Pod object
+
+As of Airflow 1.10.12, using the `airflow.contrib.kubernetes.Pod` class in the `pod_mutation_hook` is now deprecated. Instead, we recommend that users
+treat the `pod` parameter as a `kubernetes.client.models.V1Pod` object. This means that users now have access to the full Kubernetes API
+when modifying Airflow pods.
+
+### pod_template_file option now available in the KubernetesPodOperator
+
+Users can now provide a path to a YAML file for the KubernetesPodOperator using the `pod_template_file` parameter.
+
 ## Airflow 1.10.11
 
 ### Use NULL as default value for dag.description
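
As a rough sketch of what the change above enables, a `pod_mutation_hook` defined in `airflow_local_settings.py` can work with the `V1Pod` API directly. The label value, container name, and sidecar image below are illustrative placeholders, not part of this change:

from kubernetes.client import models as k8s

def pod_mutation_hook(pod):
    # As of Airflow 1.10.12, `pod` is a kubernetes.client.models.V1Pod object.
    # Attach a label to every pod launched by Airflow (example value only).
    pod.metadata.labels = pod.metadata.labels or {}
    pod.metadata.labels["mutated-by"] = "airflow-local-settings"

    # The full Kubernetes API is available, e.g. appending a sidecar container
    # (the container name and image are placeholders).
    sidecar = k8s.V1Container(name="log-shipper", image="busybox:1.31")
    pod.spec.containers.append(sidecar)

Similarly, the new `pod_template_file` option lets the KubernetesPodOperator load its pod spec from a YAML file, e.g. by passing `pod_template_file="/path/to/pod.yaml"` to the operator (the path is a placeholder).
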
diff --git a/airflow/contrib/executors/kubernetes_executor.py b/airflow/contrib/executors/kubernetes_executor.py
new file mode 100644
index 0000000..416b2d7
--- /dev/null
+++ b/airflow/contrib/executors/kubernetes_executor.py
@@ -0,0 +1,20 @@
+# -*- coding: utf-8 -*-
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from airflow.executors import kubernetes_executor  # noqa
diff --git a/airflow/contrib/kubernetes/pod.py b/airflow/contrib/kubernetes/pod.py
index 0ab3616..944cd8c 100644
--- a/airflow/contrib/kubernetes/pod.py
+++ b/airflow/contrib/kubernetes/pod.py
@@ -19,7 +19,18 @@
 import warnings
 
 # pylint: disable=unused-import
-from airflow.kubernetes.pod import Port, Resources   # noqa
+from typing import List, Union
+
+from kubernetes.client import models as k8s
+
+from airflow.kubernetes.pod import Port, Resources  # noqa
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.volume_mount import VolumeMount
+from airflow.kubernetes.secret import Secret
+
+from kubernetes.client.api_client import ApiClient
+
+api_client = ApiClient()
 
 warnings.warn(
     "This module is deprecated. Please use `airflow.kubernetes.pod`.",
@@ -120,7 +131,7 @@ class Pod(object):
         self.affinity = affinity or {}
         self.hostnetwork = hostnetwork or False
         self.tolerations = tolerations or []
-        self.security_context = security_context
+        self.security_context = security_context or {}
         self.configmaps = configmaps or []
         self.pod_runtime_info_envs = pod_runtime_info_envs or []
         self.dnspolicy = dnspolicy
@@ -154,6 +165,7 @@ class Pod(object):
             dns_policy=self.dnspolicy,
             host_network=self.hostnetwork,
             tolerations=self.tolerations,
+            affinity=self.affinity,
             security_context=self.security_context,
         )
 
@@ -161,17 +173,18 @@ class Pod(object):
             spec=spec,
             metadata=meta,
         )
-        for port in self.ports:
+        for port in _extract_ports(self.ports):
             pod = port.attach_to_pod(pod)
-        for volume in self.volumes:
+        volumes, _ = _extract_volumes_and_secrets(self.volumes, self.volume_mounts)
+        for volume in volumes:
             pod = volume.attach_to_pod(pod)
-        for volume_mount in self.volume_mounts:
+        for volume_mount in _extract_volume_mounts(self.volume_mounts):
             pod = volume_mount.attach_to_pod(pod)
         for secret in self.secrets:
             pod = secret.attach_to_pod(pod)
         for runtime_info in self.pod_runtime_info_envs:
             pod = runtime_info.attach_to_pod(pod)
-        pod = self.resources.attach_to_pod(pod)
+        pod = _extract_resources(self.resources).attach_to_pod(pod)
         return pod
 
     def as_dict(self):
@@ -182,3 +195,121 @@ class Pod(object):
         res['volumes'] = [volume.as_dict() for volume in res['volumes']]
 
         return res
+
+
+def _extract_env_vars_and_secrets(env_vars):
+    """
+    Extracts environment variables and Secret objects from V1Pod Environment
+    """
+    result = {}
+    env_vars = env_vars or []  # type: List[Union[k8s.V1EnvVar, dict]]
+    secrets = []
+    for env_var in env_vars:
+        if isinstance(env_var, k8s.V1EnvVar):
+            secret = _extract_env_secret(env_var)
+            if secret:
+                secrets.append(secret)
+                continue
+            env_var = api_client.sanitize_for_serialization(env_var)
+        result[env_var.get("name")] = env_var.get("value")
+    return result, secrets
+
+
+def _extract_env_secret(env_var):
+    if env_var.value_from and env_var.value_from.secret_key_ref:
+        secret = env_var.value_from.secret_key_ref  # type: k8s.V1SecretKeySelector
+        name = secret.name
+        key = secret.key
+        return Secret("env", deploy_target=env_var.name, secret=name, key=key)
+    return None
+
+
+def _extract_ports(ports):
+    result = []
+    ports = ports or []  # type: List[Union[k8s.V1ContainerPort, dict]]
+    for port in ports:
+        if isinstance(port, k8s.V1ContainerPort):
+            port = api_client.sanitize_for_serialization(port)
+            port = Port(name=port.get("name"), container_port=port.get("containerPort"))
+        elif not isinstance(port, Port):
+            port = Port(name=port.get("name"), container_port=port.get("containerPort"))
+        result.append(port)
+    return result
+
+
+def _extract_resources(resources):
+    if isinstance(resources, k8s.V1ResourceRequirements):
+        requests = resources.requests
+        limits = resources.limits
+        return Resources(
+            request_memory=requests.get('memory', None),
+            request_cpu=requests.get('cpu', None),
+            request_ephemeral_storage=requests.get('ephemeral-storage', None),
+            limit_memory=limits.get('memory', None),
+            limit_cpu=limits.get('cpu', None),
+            limit_ephemeral_storage=limits.get('ephemeral-storage', None),
+            limit_gpu=limits.get('nvidia.com/gpu')
+        )
+    elif isinstance(resources, Resources):
+        return resources
+
+
+def _extract_security_context(security_context):
+    if isinstance(security_context, k8s.V1PodSecurityContext):
+        security_context = api_client.sanitize_for_serialization(security_context)
+    return security_context
+
+
+def _extract_volume_mounts(volume_mounts):
+    result = []
+    volume_mounts = volume_mounts or []  # type: List[Union[k8s.V1VolumeMount, dict]]
+    for volume_mount in volume_mounts:
+        if isinstance(volume_mount, k8s.V1VolumeMount):
+            volume_mount = api_client.sanitize_for_serialization(volume_mount)
+            volume_mount = VolumeMount(
+                name=volume_mount.get("name"),
+                mount_path=volume_mount.get("mountPath"),
+                sub_path=volume_mount.get("subPath"),
+                read_only=volume_mount.get("readOnly")
+            )
+        elif not isinstance(volume_mount, VolumeMount):
+            volume_mount = VolumeMount(
+                name=volume_mount.get("name"),
+                mount_path=volume_mount.get("mountPath"),
+                sub_path=volume_mount.get("subPath"),
+                read_only=volume_mount.get("readOnly")
+            )
+
+        result.append(volume_mount)
+    return result
+
+
+def _extract_volumes_and_secrets(volumes, volume_mounts):
+    result = []
+    volumes = volumes or []  # type: List[Union[k8s.V1Volume, dict]]
+    secrets = []
+    volume_mount_dict = {
+        volume_mount.name: volume_mount
+        for volume_mount in _extract_volume_mounts(volume_mounts)
+    }
+    for volume in volumes:
+        if isinstance(volume, k8s.V1Volume):
+            secret = _extract_volume_secret(volume, volume_mount_dict.get(volume.name, None))
+            if secret:
+                secrets.append(secret)
+                continue
+            volume = api_client.sanitize_for_serialization(volume)
+            volume = Volume(name=volume.get("name"), configs=volume)
+        if not isinstance(volume, Volume):
+            volume = Volume(name=volume.get("name"), configs=volume)
+        result.append(volume)
+    return result, secrets
+
+
+def _extract_volume_secret(volume, volume_mount):
+    if not volume.secret:
+        return None
+    if volume_mount:
+        return Secret("volume", volume_mount.mount_path, volume.name, volume.secret.secret_name)
+    else:
+        return Secret("volume", None, volume.name, volume.secret.secret_name)
diff --git a/airflow/executors/kubernetes_executor.py b/airflow/executors/kubernetes_executor.py
index 7bbdc98..3ad4222 100644
--- a/airflow/executors/kubernetes_executor.py
+++ b/airflow/executors/kubernetes_executor.py
@@ -417,6 +417,12 @@ class AirflowKubernetesScheduler(LoggingMixin):
             kube_executor_config=kube_executor_config,
             worker_config=self.worker_configuration_pod
         )
+
+        sanitized_pod = self.launcher._client.api_client.sanitize_for_serialization(pod)
+        json_pod = json.dumps(sanitized_pod, indent=2)
+
+        self.log.debug('Pod Creation Request before mutation: \n%s', json_pod)
+
         # Reconcile the pod generated by the Operator and the Pod
         # generated by the .cfg file
         self.log.debug("Kubernetes running for command %s", command)
diff --git a/airflow/kubernetes/pod.py b/airflow/kubernetes/pod.py
index 9e455af..67dc983 100644
--- a/airflow/kubernetes/pod.py
+++ b/airflow/kubernetes/pod.py
@@ -20,7 +20,7 @@ Classes for interacting with Kubernetes API
 
 import copy
 
-import kubernetes.client.models as k8s
+from kubernetes.client import models as k8s
 
 from airflow.kubernetes.k8s_model import K8SModel
 
@@ -87,18 +87,25 @@ class Resources(K8SModel):
             self.request_ephemeral_storage is not None
 
     def to_k8s_client_obj(self):
-        return k8s.V1ResourceRequirements(
-            limits={
-                'cpu': self.limit_cpu,
-                'memory': self.limit_memory,
-                'nvidia.com/gpu': self.limit_gpu,
-                'ephemeral-storage': self.limit_ephemeral_storage
-            },
-            requests={
-                'cpu': self.request_cpu,
-                'memory': self.request_memory,
-                'ephemeral-storage': self.request_ephemeral_storage}
+        limits_raw = {
+            'cpu': self.limit_cpu,
+            'memory': self.limit_memory,
+            'nvidia.com/gpu': self.limit_gpu,
+            'ephemeral-storage': self.limit_ephemeral_storage
+        }
+        requests_raw = {
+            'cpu': self.request_cpu,
+            'memory': self.request_memory,
+            'ephemeral-storage': self.request_ephemeral_storage
+        }
+
+        limits = {k: v for k, v in limits_raw.items() if v}
+        requests = {k: v for k, v in requests_raw.items() if v}
+        resource_req = k8s.V1ResourceRequirements(
+            limits=limits,
+            requests=requests
         )
+        return resource_req
 
     def attach_to_pod(self, pod):
         cp_pod = copy.deepcopy(pod)
diff --git a/airflow/kubernetes/pod_generator.py b/airflow/kubernetes/pod_generator.py
index d11c175..090e2b1 100644
--- a/airflow/kubernetes/pod_generator.py
+++ b/airflow/kubernetes/pod_generator.py
@@ -36,6 +36,7 @@ from functools import reduce
 import kubernetes.client.models as k8s
 import yaml
 from kubernetes.client.api_client import ApiClient
+from airflow.contrib.kubernetes.pod import _extract_volume_mounts
 
 from airflow.exceptions import AirflowConfigException
 from airflow.version import version as airflow_version
@@ -249,7 +250,7 @@ class PodGenerator(object):
         self.container.image_pull_policy = image_pull_policy
         self.container.ports = ports or []
         self.container.resources = resources
-        self.container.volume_mounts = volume_mounts or []
+        self.container.volume_mounts = [v.to_k8s_client_obj() for v in _extract_volume_mounts(volume_mounts)]
 
         # Pod Spec
         self.spec = k8s.V1PodSpec(containers=[])
@@ -370,6 +371,11 @@ class PodGenerator(object):
                     requests=requests,
                     limits=limits
                 )
+        elif isinstance(resources, dict):
+            resources = k8s.V1ResourceRequirements(
+                requests=resources['requests'],
+                limits=resources['limits']
+            )
 
         annotations = namespaced.get('annotations', {})
         gcp_service_account_key = namespaced.get('gcp_service_account_key', None)
@@ -402,13 +408,36 @@ class PodGenerator(object):
 
         client_pod_cp = copy.deepcopy(client_pod)
         client_pod_cp.spec = PodGenerator.reconcile_specs(base_pod.spec, client_pod_cp.spec)
-
-        client_pod_cp.metadata = merge_objects(base_pod.metadata, client_pod_cp.metadata)
+        client_pod_cp.metadata = PodGenerator.reconcile_metadata(base_pod.metadata, client_pod_cp.metadata)
         client_pod_cp = merge_objects(base_pod, client_pod_cp)
 
         return client_pod_cp
 
     @staticmethod
+    def reconcile_metadata(base_meta, client_meta):
+        """
+        :param base_meta: has the base attributes which are overwritten if they exist
+            in the client_meta and remain if they do not exist in the client_meta
+        :type base_meta: k8s.V1ObjectMeta
+        :param client_meta: the spec that the client wants to create.
+        :type client_meta: k8s.V1ObjectMeta
+        :return: the merged specs
+        """
+        if base_meta and not client_meta:
+            return base_meta
+        if not base_meta and client_meta:
+            return client_meta
+        elif client_meta and base_meta:
+            client_meta.labels = merge_objects(base_meta.labels, client_meta.labels)
+            client_meta.annotations = merge_objects(base_meta.annotations, client_meta.annotations)
+            extend_object_field(base_meta, client_meta, 'managed_fields')
+            extend_object_field(base_meta, client_meta, 'finalizers')
+            extend_object_field(base_meta, client_meta, 'owner_references')
+            return merge_objects(base_meta, client_meta)
+
+        return None
+
+    @staticmethod
     def reconcile_specs(base_spec,
                         client_spec):
         """
@@ -580,10 +609,17 @@ def merge_objects(base_obj, client_obj):
 
     client_obj_cp = copy.deepcopy(client_obj)
 
+    if isinstance(base_obj, dict) and isinstance(client_obj_cp, dict):
+        client_obj_cp.update(base_obj)
+        return client_obj_cp
+
     for base_key in base_obj.to_dict().keys():
         base_val = getattr(base_obj, base_key, None)
         if not getattr(client_obj, base_key, None) and base_val:
-            setattr(client_obj_cp, base_key, base_val)
+            if not isinstance(client_obj_cp, dict):
+                setattr(client_obj_cp, base_key, base_val)
+            else:
+                client_obj_cp[base_key] = base_val
     return client_obj_cp
 
 
@@ -610,6 +646,36 @@ def extend_object_field(base_obj, client_obj, field_name):
         setattr(client_obj_cp, field_name, base_obj_field)
         return client_obj_cp
 
-    appended_fields = base_obj_field + client_obj_field
+    base_obj_set = _get_dict_from_list(base_obj_field)
+    client_obj_set = _get_dict_from_list(client_obj_field)
+
+    appended_fields = _merge_list_of_objects(base_obj_set, client_obj_set)
+
     setattr(client_obj_cp, field_name, appended_fields)
     return client_obj_cp
+
+
+def _merge_list_of_objects(base_obj_set, client_obj_set):
+    for k, v in base_obj_set.items():
+        if k not in client_obj_set:
+            client_obj_set[k] = v
+        else:
+            client_obj_set[k] = merge_objects(v, client_obj_set[k])
+    appended_field_keys = sorted(client_obj_set.keys())
+    appended_fields = [client_obj_set[k] for k in appended_field_keys]
+    return appended_fields
+
+
+def _get_dict_from_list(base_list):
+    """
+    :type base_list: list(Optional[dict, *to_dict])
+    """
+    result = {}
+    for obj in base_list:
+        if isinstance(obj, dict):
+            result[obj['name']] = obj
+        elif hasattr(obj, "to_dict"):
+            result[obj.name] = obj
+        else:
+            raise AirflowConfigException("Trying to merge invalid object {}".format(obj))
+    return result
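
A minimal sketch of how the merge helpers added above behave, assuming they are imported from the patched `airflow.kubernetes.pod_generator` module; the init-container dicts are made up:

from airflow.kubernetes.pod_generator import _get_dict_from_list, _merge_list_of_objects

# Two lists of named objects, e.g. init containers declared as dicts.
base = [{'name': 'init-base', 'image': 'base:1'}]
client = [{'name': 'init-client', 'image': 'client:1'}]

# Entries are keyed by 'name', unioned, and returned sorted by key.
merged = _merge_list_of_objects(_get_dict_from_list(base), _get_dict_from_list(client))
# merged == [{'name': 'init-base', 'image': 'base:1'},
#            {'name': 'init-client', 'image': 'client:1'}]
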
diff --git a/airflow/kubernetes/pod_launcher.py b/airflow/kubernetes/pod_launcher.py
index d6507df..875a24c 100644
--- a/airflow/kubernetes/pod_launcher.py
+++ b/airflow/kubernetes/pod_launcher.py
@@ -22,18 +22,22 @@ from datetime import datetime as dt
 
 import tenacity
 from kubernetes import watch, client
+from kubernetes.client.api_client import ApiClient
+from kubernetes.client import models as k8s
 from kubernetes.client.rest import ApiException
 from kubernetes.stream import stream as kubernetes_stream
 from requests.exceptions import BaseHTTPError
 
 from airflow import AirflowException
-from airflow.kubernetes.pod_launcher_helper import convert_to_airflow_pod
-from airflow.kubernetes.pod_generator import PodDefaults
 from airflow import settings
+from airflow.contrib.kubernetes.pod import (
+    Pod, _extract_env_vars_and_secrets, _extract_volumes_and_secrets, _extract_volume_mounts,
+    _extract_ports, _extract_security_context
+)
+from airflow.kubernetes.kube_client import get_kube_client
+from airflow.kubernetes.pod_generator import PodDefaults, PodGenerator
 from airflow.utils.log.logging_mixin import LoggingMixin
 from airflow.utils.state import State
-import kubernetes.client.models as k8s  # noqa
-from .kube_client import get_kube_client
 
 
 class PodStatus:
@@ -90,19 +94,22 @@ class PodLauncher(LoggingMixin):
     def _mutate_pod_backcompat(pod):
         """Backwards compatible Pod Mutation Hook"""
         try:
-            settings.pod_mutation_hook(pod)
-            # attempts to run pod_mutation_hook using k8s.V1Pod, if this
-            # fails we attempt to run by converting pod to Old Pod
-        except AttributeError:
+            dummy_pod = _convert_to_airflow_pod(pod)
+            settings.pod_mutation_hook(dummy_pod)
             warnings.warn(
                 "Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. "
                 "Please use `k8s.V1Pod` instead.", DeprecationWarning, stacklevel=2
             )
-            dummy_pod = convert_to_airflow_pod(pod)
-            settings.pod_mutation_hook(dummy_pod)
             dummy_pod = dummy_pod.to_v1_kubernetes_pod()
-            return dummy_pod
-        return pod
+
+            new_pod = PodGenerator.reconcile_pods(pod, dummy_pod)
+        except AttributeError as e:
+            try:
+                settings.pod_mutation_hook(pod)
+                return pod
+            except AttributeError as e2:
+                raise Exception([e, e2])
+        return new_pod
 
     def delete_pod(self, pod):
         """Deletes POD"""
@@ -269,7 +276,7 @@ class PodLauncher(LoggingMixin):
         return None
 
     def process_status(self, job_id, status):
-        """Process status infomration for the JOB"""
+        """Process status information for the JOB"""
         status = status.lower()
         if status == PodStatus.PENDING:
             return State.QUEUED
@@ -284,3 +291,43 @@ class PodLauncher(LoggingMixin):
         else:
             self.log.info('Event: Invalid state %s on job %s', status, job_id)
             return State.FAILED
+
+
+def _convert_to_airflow_pod(pod):
+    """
+    Converts a k8s.V1Pod object into a deprecated `airflow.contrib.kubernetes.pod.Pod` object.
+    This function exists purely for backwards compatibility.
+    """
+    base_container = pod.spec.containers[0]  # type: k8s.V1Container
+    env_vars, secrets = _extract_env_vars_and_secrets(base_container.env)
+    volumes, vol_secrets = _extract_volumes_and_secrets(pod.spec.volumes, base_container.volume_mounts)
+    secrets.extend(vol_secrets)
+    api_client = ApiClient()
+    init_containers = pod.spec.init_containers
+    if pod.spec.init_containers is not None:
+        init_containers = [api_client.sanitize_for_serialization(i) for i in pod.spec.init_containers]
+    dummy_pod = Pod(
+        image=base_container.image,
+        envs=env_vars,
+        cmds=base_container.command,
+        args=base_container.args,
+        labels=pod.metadata.labels,
+        annotations=pod.metadata.annotations,
+        node_selectors=pod.spec.node_selector,
+        name=pod.metadata.name,
+        ports=_extract_ports(base_container.ports),
+        volumes=volumes,
+        volume_mounts=_extract_volume_mounts(base_container.volume_mounts),
+        namespace=pod.metadata.namespace,
+        image_pull_policy=base_container.image_pull_policy or 'IfNotPresent',
+        tolerations=pod.spec.tolerations,
+        init_containers=init_containers,
+        image_pull_secrets=pod.spec.image_pull_secrets,
+        resources=base_container.resources,
+        service_account_name=pod.spec.service_account_name,
+        secrets=secrets,
+        affinity=pod.spec.affinity,
+        hostnetwork=pod.spec.host_network,
+        security_context=_extract_security_context(pod.spec.security_context)
+    )
+    return dummy_pod
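
To make the dual code path in `_mutate_pod_backcompat` above concrete, here is a hedged sketch of the two hook styles an `airflow_local_settings.py` can define (names and values are illustrative). The old-style hook, which mutates the deprecated contrib `Pod`, is tried first; a hook written against `k8s.V1Pod` raises `AttributeError` on that dummy object, and the launcher then retries it with the original `V1Pod`:

    # Old-style hook: receives airflow.contrib.kubernetes.pod.Pod
    def pod_mutation_hook(pod):
        pod.namespace = 'airflow-tests'
        pod.labels.update({'mutated': 'true'})

    # New-style hook (define this one *instead*): receives kubernetes.client.models.V1Pod
    # def pod_mutation_hook(pod):
    #     pod.metadata.namespace = 'airflow-tests'
    #     pod.spec.containers[0].image = 'my-image'
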
diff --git a/airflow/kubernetes/pod_launcher_helper.py b/airflow/kubernetes/pod_launcher_helper.py
deleted file mode 100644
index 8c9fc6e..0000000
--- a/airflow/kubernetes/pod_launcher_helper.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-from typing import List, Union
-
-import kubernetes.client.models as k8s  # noqa
-
-from airflow.kubernetes.volume import Volume
-from airflow.kubernetes.volume_mount import VolumeMount
-from airflow.kubernetes.pod import Port
-from airflow.contrib.kubernetes.pod import Pod
-
-
-def convert_to_airflow_pod(pod):
-    base_container = pod.spec.containers[0]  # type: k8s.V1Container
-
-    dummy_pod = Pod(
-        image=base_container.image,
-        envs=_extract_env_vars(base_container.env),
-        volumes=_extract_volumes(pod.spec.volumes),
-        volume_mounts=_extract_volume_mounts(base_container.volume_mounts),
-        labels=pod.metadata.labels,
-        name=pod.metadata.name,
-        namespace=pod.metadata.namespace,
-        image_pull_policy=base_container.image_pull_policy or 'IfNotPresent',
-        cmds=[],
-        ports=_extract_ports(base_container.ports)
-    )
-    return dummy_pod
-
-
-def _extract_env_vars(env_vars):
-    """
-
-    :param env_vars:
-    :type env_vars: list
-    :return: result
-    :rtype: dict
-    """
-    result = {}
-    env_vars = env_vars or []  # type: List[Union[k8s.V1EnvVar, dict]]
-    for env_var in env_vars:
-        if isinstance(env_var, k8s.V1EnvVar):
-            env_var.to_dict()
-        result[env_var.get("name")] = env_var.get("value")
-    return result
-
-
-def _extract_volumes(volumes):
-    result = []
-    volumes = volumes or []  # type: List[Union[k8s.V1Volume, dict]]
-    for volume in volumes:
-        if isinstance(volume, k8s.V1Volume):
-            volume = volume.to_dict()
-        result.append(Volume(name=volume.get("name"), configs=volume))
-    return result
-
-
-def _extract_volume_mounts(volume_mounts):
-    result = []
-    volume_mounts = volume_mounts or []  # type: List[Union[k8s.V1VolumeMount, dict]]
-    for volume_mount in volume_mounts:
-        if isinstance(volume_mount, k8s.V1VolumeMount):
-            volume_mount = volume_mount.to_dict()
-        result.append(
-            VolumeMount(
-                name=volume_mount.get("name"),
-                mount_path=volume_mount.get("mount_path"),
-                sub_path=volume_mount.get("sub_path"),
-                read_only=volume_mount.get("read_only"))
-        )
-
-    return result
-
-
-def _extract_ports(ports):
-    result = []
-    ports = ports or []  # type: List[Union[k8s.V1ContainerPort, dict]]
-    for port in ports:
-        if isinstance(port, k8s.V1ContainerPort):
-            port = port.to_dict()
-        result.append(Port(name=port.get("name"), container_port=port.get("container_port")))
-    return result
diff --git a/airflow/kubernetes/secret.py b/airflow/kubernetes/secret.py
index 9ff1927..df07747 100644
--- a/airflow/kubernetes/secret.py
+++ b/airflow/kubernetes/secret.py
@@ -55,7 +55,7 @@ class Secret(K8SModel):
             # if deploying to env, capitalize the deploy target
             self.deploy_target = deploy_target.upper()
 
-        if key is not None and deploy_target is None:
+        if key is not None and deploy_target is None and deploy_type == "env":
             raise AirflowConfigException(
                 'If `key` is set, `deploy_target` should not be None'
             )
@@ -84,6 +84,14 @@ class Secret(K8SModel):
     def to_volume_secret(self):
         import kubernetes.client.models as k8s
         vol_id = 'secretvol{}'.format(uuid.uuid4())
+        if self.deploy_target:
+            volume_mount = k8s.V1VolumeMount(
+                mount_path=self.deploy_target,
+                name=vol_id,
+                read_only=True
+            )
+        else:
+            volume_mount = None
         return (
             k8s.V1Volume(
                 name=vol_id,
@@ -91,11 +99,7 @@ class Secret(K8SModel):
                     secret_name=self.secret
                 )
             ),
-            k8s.V1VolumeMount(
-                mount_path=self.deploy_target,
-                name=vol_id,
-                read_only=True
-            )
+            volume_mount
         )
 
     def attach_to_pod(self, pod):
@@ -104,8 +108,9 @@ class Secret(K8SModel):
             volume, volume_mount = self.to_volume_secret()
             cp_pod.spec.volumes = pod.spec.volumes or []
             cp_pod.spec.volumes.append(volume)
-            cp_pod.spec.containers[0].volume_mounts = pod.spec.containers[0].volume_mounts or []
-            cp_pod.spec.containers[0].volume_mounts.append(volume_mount)
+            if volume_mount:
+                cp_pod.spec.containers[0].volume_mounts = pod.spec.containers[0].volume_mounts or []
+                cp_pod.spec.containers[0].volume_mounts.append(volume_mount)
         if self.deploy_type == 'env' and self.key is not None:
             env = self.to_env_secret()
             cp_pod.spec.containers[0].env = cp_pod.spec.containers[0].env or []
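
A short sketch (illustrative secret and path names) of the behaviour the `Secret` changes above enable: a volume-type secret may now omit `deploy_target`, in which case `attach_to_pod` adds the `V1Volume` but no volume mount on the base container, while env-type secrets with a `key` still require a `deploy_target`:

    from airflow.kubernetes.secret import Secret

    mounted = Secret("volume", "/etc/sql_conn", "airflow-secrets")    # volume + volumeMount
    unmounted = Secret("volume", None, "init-volume-secret")          # volume only, no mount
    env_secret = Secret("env", "SQL_CONN", "airflow-secrets", "sql_alchemy_conn")
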
diff --git a/airflow/kubernetes/volume.py b/airflow/kubernetes/volume.py
index 9d85959..9e5e5c4 100644
--- a/airflow/kubernetes/volume.py
+++ b/airflow/kubernetes/volume.py
@@ -37,9 +37,15 @@ class Volume(K8SModel):
         self.configs = configs
 
     def to_k8s_client_obj(self):
-        configs = self.configs
-        configs['name'] = self.name
-        return configs
+        from kubernetes.client import models as k8s
+        resp = k8s.V1Volume(name=self.name)
+        for k, v in self.configs.items():
+            snake_key = Volume._convert_to_snake_case(k)
+            if hasattr(resp, snake_key):
+                setattr(resp, snake_key, v)
+            else:
+                raise AttributeError("V1Volume does not have attribute {}".format(k))
+        return resp
 
     def attach_to_pod(self, pod):
         cp_pod = copy.deepcopy(pod)
@@ -47,3 +53,8 @@ class Volume(K8SModel):
         cp_pod.spec.volumes = pod.spec.volumes or []
         cp_pod.spec.volumes.append(volume)
         return cp_pod
+
+    # source: https://www.geeksforgeeks.org/python-program-to-convert-camel-case-string-to-snake-case/
+    @staticmethod
+    def _convert_to_snake_case(str):
+        return ''.join(['_' + i.lower() if i.isupper() else i for i in str]).lstrip('_')
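
As a quick illustration of the stricter `Volume.to_k8s_client_obj` above (the happy path is also covered by the new tests/kubernetes/models/test_volume.py further down): camelCase keys in `configs` are snake-cased onto the `V1Volume`, and an unrecognised key now fails fast instead of being passed through silently. The `notAField` key is made up:

    from airflow.kubernetes.volume import Volume

    vol = Volume(name="dags", configs={"persistentVolumeClaim": {"claimName": "dags-pvc"}})
    vol.to_k8s_client_obj()   # V1Volume(name="dags", persistent_volume_claim={"claimName": "dags-pvc"})

    try:
        Volume(name="dags", configs={"notAField": True}).to_k8s_client_obj()
    except AttributeError as err:
        print(err)            # "V1Volume does not have attribute notAField"
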
diff --git a/airflow/operators/python_operator.py b/airflow/operators/python_operator.py
index 78b6a41..392e0fc 100644
--- a/airflow/operators/python_operator.py
+++ b/airflow/operators/python_operator.py
@@ -234,8 +234,8 @@ class PythonVirtualenvOperator(PythonOperator):
         python_version=None,  # type: Optional[str]
         use_dill=False,  # type: bool
         system_site_packages=True,  # type: bool
-        op_args=None,  # type: Iterable
-        op_kwargs=None,  # type: Dict
+        op_args=None,  # type: Optional[Iterable]
+        op_kwargs=None,  # type: Optional[Dict]
         provide_context=False,  # type: bool
         string_args=None,  # type: Optional[Iterable[str]]
         templates_dict=None,  # type: Optional[Dict]
diff --git a/docs/conf.py b/docs/conf.py
index d18b6ea..101d050 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -220,6 +220,7 @@ exclude_patterns = [
     '_api/airflow/version',
     '_api/airflow/www',
     '_api/airflow/www_rbac',
+    '_api/kubernetes_executor',
     '_api/main',
     '_api/mesos_executor',
     'autoapi_templates',
diff --git a/kubernetes_tests/test_kubernetes_pod_operator.py b/kubernetes_tests/test_kubernetes_pod_operator.py
index b6cecda..50a1258 100644
--- a/kubernetes_tests/test_kubernetes_pod_operator.py
+++ b/kubernetes_tests/test_kubernetes_pod_operator.py
@@ -404,7 +404,6 @@ class TestKubernetesPodOperatorSystem(unittest.TestCase):
             'limits': {
                 'memory': '64Mi',
                 'cpu': 0.25,
-                'nvidia.com/gpu': None,
                 'ephemeral-storage': '2Gi'
             }
         }
diff --git a/tests/kubernetes/models/test_pod.py b/tests/kubernetes/models/test_pod.py
index 2e53d60..8a89da0 100644
--- a/tests/kubernetes/models/test_pod.py
+++ b/tests/kubernetes/models/test_pod.py
@@ -75,11 +75,16 @@ class TestPod(unittest.TestCase):
             }
         }, result)
 
-    def test_to_v1_pod(self):
+    @mock.patch('uuid.uuid4')
+    def test_to_v1_pod(self, mock_uuid):
         from airflow.contrib.kubernetes.pod import Pod as DeprecatedPod
         from airflow.kubernetes.volume import Volume
         from airflow.kubernetes.volume_mount import VolumeMount
+        from airflow.kubernetes.secret import Secret
         from airflow.kubernetes.pod import Resources
+        import uuid
+        static_uuid = uuid.UUID('cf4a56d2-8101-4217-b027-2af6216feb48')
+        mock_uuid.return_value = static_uuid
 
         pod = DeprecatedPod(
             image="foo",
@@ -93,7 +98,19 @@ class TestPod(unittest.TestCase):
                 request_cpu="100Mi",
                 limit_gpu="100G"
             ),
-            volumes=[Volume(name="foo", configs={})],
+            init_containers=k8s.V1Container(
+                name="test-container",
+                volume_mounts=k8s.V1VolumeMount(mount_path="/foo/bar", name="init-volume-secret")
+            ),
+            volumes=[
+                Volume(name="foo", configs={}),
+                {"name": "bar", 'secret': {'secretName': 'volume-secret'}}
+            ],
+            secrets=[
+                Secret("volume", None, "init-volume-secret"),
+                Secret('env', "AIRFLOW_SECRET", 'secret_name', "airflow_config"),
+                Secret("volume", "/opt/airflow", "volume-secret", "secret-key")
+            ],
             volume_mounts=[VolumeMount(name="foo", mount_path="/mnt", sub_path="/", read_only=True)]
         )
 
@@ -103,55 +120,40 @@ class TestPod(unittest.TestCase):
         result = k8s_client.sanitize_for_serialization(result)
 
         expected = \
-            {
-                'metadata':
-                    {
-                        'labels': {},
-                        'name': 'bar',
-                        'namespace': 'baz'
-                    },
-                'spec':
-                    {'containers':
-                        [
-                            {
-                                'args': [],
-                                'command': ['airflow'],
-                                'env': [{'name': 'test_key', 'value': 'test_value'}],
-                                'image': 'foo',
-                                'imagePullPolicy': 'Never',
-                                'name': 'base',
-                                'volumeMounts':
-                                    [
-                                        {
-                                            'mountPath': '/mnt',
-                                            'name': 'foo',
-                                            'readOnly': True, 'subPath': '/'
-                                        }
-                                    ],  # noqa
-                                'resources':
-                                    {
-                                        'limits':
-                                            {
-                                                'cpu': None,
-                                                'memory': None,
-                                                'nvidia.com/gpu': '100G',
-                                                'ephemeral-storage': None
-                                            },
-                                        'requests':
-                                            {
-                                                'cpu': '100Mi',
-                                                'memory': '1G',
-                                                'ephemeral-storage': None
-                                            }
-                                }
-                            }
-                        ],
-                        'hostNetwork': False,
-                        'tolerations': [],
-                        'volumes': [
-                            {'name': 'foo'}
-                        ]
-                     }
-            }
+            {'metadata': {'labels': {}, 'name': 'bar', 'namespace': 'baz'},
+             'spec': {'affinity': {},
+                      'containers': [{'args': [],
+                                      'command': ['airflow'],
+                                      'env': [{'name': 'test_key', 'value': 'test_value'},
+                                              {'name': 'AIRFLOW_SECRET',
+                                               'valueFrom': {'secretKeyRef': {'key': 'airflow_config',
+                                                                              'name': 'secret_name'}}}],
+                                      'image': 'foo',
+                                      'imagePullPolicy': 'Never',
+                                      'name': 'base',
+                                      'resources': {'limits': {'nvidia.com/gpu': '100G'},
+                                                    'requests': {'cpu': '100Mi',
+                                                                 'memory': '1G'}},
+                                      'volumeMounts': [{'mountPath': '/mnt',
+                                                        'name': 'foo',
+                                                        'readOnly': True,
+                                                        'subPath': '/'},
+                                                       {'mountPath': '/opt/airflow',
+                                                       'name': 'secretvol' + str(static_uuid),
+                                                        'readOnly': True}]}],
+                      'hostNetwork': False,
+                      'initContainers': {'name': 'test-container',
+                                         'volumeMounts': {'mountPath': '/foo/bar',
+                                                          'name': 'init-volume-secret'}},
+                      'securityContext': {},
+                      'tolerations': [],
+                      'volumes': [{'name': 'foo'},
+                                  {'name': 'bar',
+                                   'secret': {'secretName': 'volume-secret'}},
+                                  {'name': 'secretvolcf4a56d2-8101-4217-b027-2af6216feb48',
+                                   'secret': {'secretName': 'init-volume-secret'}},
+                                  {'name': 'secretvol' + str(static_uuid),
+                                   'secret': {'secretName': 'volume-secret'}}
+                                  ]}}
         self.maxDiff = None
-        self.assertEquals(expected, result)
+        self.assertEqual(expected, result)
diff --git a/tests/kubernetes/models/test_volume.py b/tests/kubernetes/models/test_volume.py
new file mode 100644
index 0000000..c1b8e29
--- /dev/null
+++ b/tests/kubernetes/models/test_volume.py
@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import unittest
+
+from kubernetes.client import models as k8s
+
+from airflow.kubernetes.volume import Volume
+
+
+class TestVolume(unittest.TestCase):
+    def test_to_k8s_object(self):
+        volume_config = {
+            'persistentVolumeClaim':
+                {
+                    'claimName': 'test-volume'
+                }
+        }
+        volume = Volume(name='test-volume', configs=volume_config)
+        expected_volume = k8s.V1Volume(
+            name="test-volume",
+            persistent_volume_claim={
+                "claimName": "test-volume"
+            }
+        )
+        result = volume.to_k8s_client_obj()
+        self.assertEqual(result, expected_volume)
diff --git a/tests/kubernetes/test_pod_generator.py b/tests/kubernetes/test_pod_generator.py
index d0faf4c..bb714d4 100644
--- a/tests/kubernetes/test_pod_generator.py
+++ b/tests/kubernetes/test_pod_generator.py
@@ -255,6 +255,20 @@ class TestPodGenerator(unittest.TestCase):
                         "name": "example-kubernetes-test-volume",
                     },
                 ],
+                "resources": {
+                    "requests": {
+                        "memory": "256Mi",
+                        "cpu": "500m",
+                        "ephemeral-storage": "2G",
+                        "nvidia.com/gpu": "0"
+                    },
+                    "limits": {
+                        "memory": "512Mi",
+                        "cpu": "1000m",
+                        "ephemeral-storage": "2G",
+                        "nvidia.com/gpu": "0"
+                    }
+                }
             }
         })
         result = self.k8s_client.sanitize_for_serialization(result)
@@ -277,6 +291,92 @@ class TestPodGenerator(unittest.TestCase):
                         'mountPath': '/foo/',
                         'name': 'example-kubernetes-test-volume'
                     }],
+                    "resources": {
+                        "requests": {
+                            "memory": "256Mi",
+                            "cpu": "500m",
+                            "ephemeral-storage": "2G",
+                            "nvidia.com/gpu": "0"
+                        },
+                        "limits": {
+                            "memory": "512Mi",
+                            "cpu": "1000m",
+                            "ephemeral-storage": "2G",
+                            "nvidia.com/gpu": "0"
+                        }
+                    }
+                }],
+                'hostNetwork': False,
+                'imagePullSecrets': [],
+                'volumes': [{
+                    'hostPath': {'path': '/tmp/'},
+                    'name': 'example-kubernetes-test-volume'
+                }],
+            }
+        }, result)
+
+    @mock.patch('uuid.uuid4')
+    def test_from_obj_with_resources_object(self, mock_uuid):
+        mock_uuid.return_value = self.static_uuid
+        result = PodGenerator.from_obj({
+            "KubernetesExecutor": {
+                "annotations": {"test": "annotation"},
+                "volumes": [
+                    {
+                        "name": "example-kubernetes-test-volume",
+                        "hostPath": {"path": "/tmp/"},
+                    },
+                ],
+                "volume_mounts": [
+                    {
+                        "mountPath": "/foo/",
+                        "name": "example-kubernetes-test-volume",
+                    },
+                ],
+                "resources": {
+                    "requests": {
+                        "memory": "256Mi",
+                        "cpu": "500m",
+                        "ephemeral-storage": "2G",
+                        "nvidia.com/gpu": "0"
+                    },
+                    "limits": {
+                        "memory": "512Mi",
+                        "cpu": "1000m",
+                        "ephemeral-storage": "2G",
+                        "nvidia.com/gpu": "0"
+                    }
+                }
+            }
+        })
+        result = self.k8s_client.sanitize_for_serialization(result)
+
+        self.assertEqual({
+            'apiVersion': 'v1',
+            'kind': 'Pod',
+            'metadata': {
+                'annotations': {'test': 'annotation'},
+            },
+            'spec': {
+                'containers': [{
+                    'args': [],
+                    'command': [],
+                    'env': [],
+                    'envFrom': [],
+                    'name': 'base',
+                    'ports': [],
+                    'volumeMounts': [{
+                        'mountPath': '/foo/',
+                        'name': 'example-kubernetes-test-volume'
+                    }],
+                    'resources': {'limits': {'cpu': '1000m',
+                                             'ephemeral-storage': '2G',
+                                             'memory': '512Mi',
+                                             'nvidia.com/gpu': '0'},
+                                  'requests': {'cpu': '500m',
+                                               'ephemeral-storage': '2G',
+                                               'memory': '256Mi',
+                                               'nvidia.com/gpu': '0'}},
                 }],
                 'hostNetwork': False,
                 'imagePullSecrets': [],
@@ -586,7 +686,7 @@ class TestPodGenerator(unittest.TestCase):
         }, sanitized_result)
 
     @mock.patch('uuid.uuid4')
-    def test_construct_pod_empty_execuctor_config(self, mock_uuid):
+    def test_construct_pod_empty_executor_config(self, mock_uuid):
         mock_uuid.return_value = self.static_uuid
         worker_config = k8s.V1Pod(
             spec=k8s.V1PodSpec(
@@ -731,6 +831,92 @@ class TestPodGenerator(unittest.TestCase):
             }
         }, sanitized_result)
 
+    @mock.patch('uuid.uuid4')
+    def test_construct_pod_with_mutation(self, mock_uuid):
+        mock_uuid.return_value = self.static_uuid
+        worker_config = k8s.V1Pod(
+            metadata=k8s.V1ObjectMeta(
+                name='gets-overridden-by-dynamic-args',
+                annotations={
+                    'should': 'stay'
+                }
+            ),
+            spec=k8s.V1PodSpec(
+                containers=[
+                    k8s.V1Container(
+                        name='doesnt-override',
+                        resources=k8s.V1ResourceRequirements(
+                            limits={
+                                'cpu': '1m',
+                                'memory': '1G'
+                            }
+                        ),
+                        security_context=k8s.V1SecurityContext(
+                            run_as_user=1
+                        )
+                    )
+                ]
+            )
+        )
+        executor_config = k8s.V1Pod(
+            spec=k8s.V1PodSpec(
+                containers=[
+                    k8s.V1Container(
+                        name='doesnt-override-either',
+                        resources=k8s.V1ResourceRequirements(
+                            limits={
+                                'cpu': '2m',
+                                'memory': '2G'
+                            }
+                        )
+                    )
+                ]
+            )
+        )
+
+        result = PodGenerator.construct_pod(
+            'dag_id',
+            'task_id',
+            'pod_id',
+            3,
+            'date',
+            ['command'],
+            executor_config,
+            worker_config,
+            'namespace',
+            'uuid',
+        )
+        sanitized_result = self.k8s_client.sanitize_for_serialization(result)
+
+        self.metadata.update({'annotations': {'should': 'stay'}})
+
+        self.assertEqual({
+            'apiVersion': 'v1',
+            'kind': 'Pod',
+            'metadata': self.metadata,
+            'spec': {
+                'containers': [{
+                    'args': [],
+                    'command': ['command'],
+                    'env': [],
+                    'envFrom': [],
+                    'name': 'base',
+                    'ports': [],
+                    'resources': {
+                        'limits': {
+                            'cpu': '2m',
+                            'memory': '2G'
+                        }
+                    },
+                    'volumeMounts': [],
+                    'securityContext': {'runAsUser': 1}
+                }],
+                'hostNetwork': False,
+                'imagePullSecrets': [],
+                'volumes': []
+            }
+        }, sanitized_result)
+
     def test_merge_objects_empty(self):
         annotations = {'foo1': 'bar1'}
         base_obj = k8s.V1ObjectMeta(annotations=annotations)
@@ -901,3 +1087,21 @@ spec:
         PodGenerator(image='k')
         PodGenerator(pod_template_file='tests/kubernetes/pod.yaml')
         PodGenerator(pod=k8s.V1Pod())
+
+    def test_add_custom_label(self):
+        from kubernetes.client import models as k8s
+
+        pod = PodGenerator.construct_pod(
+            namespace="test",
+            worker_uuid="test",
+            pod_id="test",
+            dag_id="test",
+            task_id="test",
+            try_number=1,
+            date="23-07-2020",
+            command="test",
+            kube_executor_config=None,
+            worker_config=k8s.V1Pod(metadata=k8s.V1ObjectMeta(labels={"airflow-test": "airflow-task-pod"},
+                                                              annotations={"my.annotation": "foo"})))
+        self.assertIn("airflow-test", pod.metadata.labels)
+        self.assertIn("my.annotation", pod.metadata.annotations)
diff --git a/tests/kubernetes/test_pod_launcher.py b/tests/kubernetes/test_pod_launcher.py
index 09ba339..64c24c6 100644
--- a/tests/kubernetes/test_pod_launcher.py
+++ b/tests/kubernetes/test_pod_launcher.py
@@ -16,11 +16,17 @@
 # under the License.
 import unittest
 import mock
+from kubernetes.client import models as k8s
 
 from requests.exceptions import BaseHTTPError
 
 from airflow import AirflowException
-from airflow.kubernetes.pod_launcher import PodLauncher
+from airflow.contrib.kubernetes.pod import Pod
+from airflow.kubernetes.pod import Port
+from airflow.kubernetes.pod_launcher import PodLauncher, _convert_to_airflow_pod
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.secret import Secret
+from airflow.kubernetes.volume_mount import VolumeMount
 
 
 class TestPodLauncher(unittest.TestCase):
@@ -162,3 +168,148 @@ class TestPodLauncher(unittest.TestCase):
             self.pod_launcher.read_pod,
             mock.sentinel
         )
+
+
+class TestPodLauncherHelper(unittest.TestCase):
+    def test_convert_to_airflow_pod(self):
+        input_pod = k8s.V1Pod(
+            metadata=k8s.V1ObjectMeta(
+                name="foo",
+                namespace="bar"
+            ),
+            spec=k8s.V1PodSpec(
+                init_containers=[
+                    k8s.V1Container(
+                        name="init-container",
+                        volume_mounts=[k8s.V1VolumeMount(mount_path="/tmp", name="init-secret")]
+                    )
+                ],
+                containers=[
+                    k8s.V1Container(
+                        name="base",
+                        command=["foo"],
+                        image="myimage",
+                        env=[
+                            k8s.V1EnvVar(
+                                name="AIRFLOW_SECRET",
+                                value_from=k8s.V1EnvVarSource(
+                                    secret_key_ref=k8s.V1SecretKeySelector(
+                                        name="ai",
+                                        key="secret_key"
+                                    )
+                                ))
+                        ],
+                        ports=[
+                            k8s.V1ContainerPort(
+                                name="myport",
+                                container_port=8080,
+                            )
+                        ],
+                        volume_mounts=[
+                            k8s.V1VolumeMount(
+                                name="myvolume",
+                                mount_path="/tmp/mount",
+                                read_only="True"
+                            ),
+                            k8s.V1VolumeMount(
+                                name='airflow-config',
+                                mount_path='/config',
+                                sub_path='airflow.cfg',
+                                read_only=True
+                            ),
+                            k8s.V1VolumeMount(
+                                name="airflow-secret",
+                                mount_path="/opt/mount",
+                                read_only=True
+                            )]
+                    )
+                ],
+                security_context=k8s.V1PodSecurityContext(
+                    run_as_user=0,
+                    fs_group=0,
+                ),
+                volumes=[
+                    k8s.V1Volume(
+                        name="myvolume"
+                    ),
+                    k8s.V1Volume(
+                        name="airflow-config",
+                        config_map=k8s.V1ConfigMap(
+                            data="airflow-data"
+                        )
+                    ),
+                    k8s.V1Volume(
+                        name="airflow-secret",
+                        secret=k8s.V1SecretVolumeSource(
+                            secret_name="secret-name",
+
+                        )
+                    ),
+                    k8s.V1Volume(
+                        name="init-secret",
+                        secret=k8s.V1SecretVolumeSource(
+                            secret_name="secret-name",
+                        )
+                    )
+                ]
+            )
+        )
+        result_pod = _convert_to_airflow_pod(input_pod)
+
+        expected = Pod(
+            name="foo",
+            namespace="bar",
+            envs={},
+            init_containers=[
+                {'name': 'init-container', 'volumeMounts': [{'mountPath': '/tmp', 'name': 'init-secret'}]}
+            ],
+            cmds=["foo"],
+            image="myimage",
+            ports=[
+                Port(name="myport", container_port=8080)
+            ],
+            volume_mounts=[
+                VolumeMount(
+                    name="myvolume",
+                    mount_path="/tmp/mount",
+                    sub_path=None,
+                    read_only="True"
+                ),
+                VolumeMount(
+                    name="airflow-config",
+                    read_only=True,
+                    mount_path="/config",
+                    sub_path="airflow.cfg"
+                ),
+                VolumeMount(
+                    name="airflow-secret",
+                    read_only=True,
+                    mount_path="/opt/mount",
+                    sub_path=None,
+                )],
+            secrets=[Secret("env", "AIRFLOW_SECRET", "ai", "secret_key"),
+                     Secret('volume', '/opt/mount', 'airflow-secret', "secret-name"),
+                     Secret('volume', None, 'init-secret', 'secret-name')],
+            security_context={'fsGroup': 0, 'runAsUser': 0},
+            volumes=[Volume(name="myvolume", configs={'name': 'myvolume'}),
+                     Volume(name="airflow-config", configs={'configMap': {'data': 'airflow-data'},
+                                                            'name': 'airflow-config'})]
+        )
+        expected_dict = expected.as_dict()
+        result_dict = result_pod.as_dict()
+        parsed_configs = self.pull_out_volumes(result_dict)
+        result_dict['volumes'] = parsed_configs
+        self.assertDictEqual(expected_dict, result_dict)
+
+    @staticmethod
+    def pull_out_volumes(result_dict):
+        parsed_configs = []
+        for volume in result_dict['volumes']:
+            vol = {'name': volume['name']}
+            confs = {}
+            for k, v in volume['configs'].items():
+                if v and k[0] != '_':
+                    confs[k] = v
+            vol['configs'] = confs
+            parsed_configs.append(vol)
+        return parsed_configs
diff --git a/tests/kubernetes/test_pod_launcher_helper.py b/tests/kubernetes/test_pod_launcher_helper.py
deleted file mode 100644
index 761d138..0000000
--- a/tests/kubernetes/test_pod_launcher_helper.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-import unittest
-
-from airflow.kubernetes.pod import Port
-from airflow.kubernetes.volume_mount import VolumeMount
-from airflow.kubernetes.volume import Volume
-from airflow.kubernetes.pod_launcher_helper import convert_to_airflow_pod
-from airflow.contrib.kubernetes.pod import Pod
-import kubernetes.client.models as k8s
-
-
-class TestPodLauncherHelper(unittest.TestCase):
-    def test_convert_to_airflow_pod(self):
-        input_pod = k8s.V1Pod(
-            metadata=k8s.V1ObjectMeta(
-                name="foo",
-                namespace="bar"
-            ),
-            spec=k8s.V1PodSpec(
-                containers=[
-                    k8s.V1Container(
-                        name="base",
-                        command="foo",
-                        image="myimage",
-                        ports=[
-                            k8s.V1ContainerPort(
-                                name="myport",
-                                container_port=8080,
-                            )
-                        ],
-                        volume_mounts=[k8s.V1VolumeMount(
-                            name="mymount",
-                            mount_path="/tmp/mount",
-                            read_only="True"
-                        )]
-                    )
-                ],
-                volumes=[
-                    k8s.V1Volume(
-                        name="myvolume"
-                    )
-                ]
-            )
-        )
-        result_pod = convert_to_airflow_pod(input_pod)
-
-        expected = Pod(
-            name="foo",
-            namespace="bar",
-            envs={},
-            cmds=[],
-            image="myimage",
-            ports=[
-                Port(name="myport", container_port=8080)
-            ],
-            volume_mounts=[VolumeMount(
-                name="mymount",
-                mount_path="/tmp/mount",
-                sub_path=None,
-                read_only="True"
-            )],
-            volumes=[Volume(name="myvolume", configs={'name': 'myvolume'})]
-        )
-        expected_dict = expected.as_dict()
-        result_dict = result_pod.as_dict()
-        parsed_configs = self.pull_out_volumes(result_dict)
-        result_dict['volumes'] = parsed_configs
-        self.maxDiff = None
-
-        self.assertDictEqual(expected_dict, result_dict)
-
-    @staticmethod
-    def pull_out_volumes(result_dict):
-        parsed_configs = []
-        for volume in result_dict['volumes']:
-            vol = {'name': volume['name']}
-            confs = {}
-            for k, v in volume['configs'].items():
-                if v and k[0] != '_':
-                    confs[k] = v
-            vol['configs'] = confs
-            parsed_configs.append(vol)
-        return parsed_configs
diff --git a/tests/kubernetes/test_worker_configuration.py b/tests/kubernetes/test_worker_configuration.py
index a94a112..0273ae8 100644
--- a/tests/kubernetes/test_worker_configuration.py
+++ b/tests/kubernetes/test_worker_configuration.py
@@ -173,6 +173,13 @@ class TestKubernetesWorkerConfiguration(unittest.TestCase):
 
         self.assertNotIn('AIRFLOW__CORE__DAGS_FOLDER', env)
 
+    @conf_vars({
+        ('kubernetes', 'airflow_configmap'): 'airflow-configmap'})
+    def test_worker_adds_config(self):
+        worker_config = WorkerConfiguration(self.kube_config)
+        volumes = worker_config._get_volumes()
+        print(volumes)
+
     def test_worker_environment_when_dags_folder_specified(self):
         self.kube_config.airflow_configmap = 'airflow-configmap'
         self.kube_config.git_dags_folder_mount_point = ''
diff --git a/tests/test_local_settings.py b/tests/test_local_settings.py
deleted file mode 100644
index 0e45ad8..0000000
--- a/tests/test_local_settings.py
+++ /dev/null
@@ -1,269 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-import os
-import sys
-import tempfile
-import unittest
-from airflow.kubernetes import pod_generator
-from tests.compat import MagicMock, Mock, call, patch
-
-
-SETTINGS_FILE_POLICY = """
-def test_policy(task_instance):
-    task_instance.run_as_user = "myself"
-"""
-
-SETTINGS_FILE_POLICY_WITH_DUNDER_ALL = """
-__all__ = ["test_policy"]
-
-def test_policy(task_instance):
-    task_instance.run_as_user = "myself"
-
-def not_policy():
-    print("This shouldn't be imported")
-"""
-
-SETTINGS_FILE_POD_MUTATION_HOOK = """
-from airflow.kubernetes.volume import Volume
-from airflow.kubernetes.pod import Port, Resources
-
-def pod_mutation_hook(pod):
-    pod.namespace = 'airflow-tests'
-    pod.image = 'my_image'
-    pod.volumes.append(Volume(name="bar", configs={}))
-    pod.ports = [Port(container_port=8080)]
-    pod.resources = Resources(
-                    request_memory="2G",
-                    request_cpu="200Mi",
-                    limit_gpu="200G"
-                )
-
-"""
-
-SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD = """
-def pod_mutation_hook(pod):
-    pod.spec.containers[0].image = "test-image"
-
-"""
-
-
-class SettingsContext:
-    def __init__(self, content, module_name):
-        self.content = content
-        self.settings_root = tempfile.mkdtemp()
-        filename = "{}.py".format(module_name)
-        self.settings_file = os.path.join(self.settings_root, filename)
-
-    def __enter__(self):
-        with open(self.settings_file, 'w') as handle:
-            handle.writelines(self.content)
-        sys.path.append(self.settings_root)
-        return self.settings_file
-
-    def __exit__(self, *exc_info):
-        sys.path.remove(self.settings_root)
-
-
-class LocalSettingsTest(unittest.TestCase):
-    # Make sure that the configure_logging is not cached
-    def setUp(self):
-        self.old_modules = dict(sys.modules)
-
-    def tearDown(self):
-        # Remove any new modules imported during the test run. This lets us
-        # import the same source files for more than one test.
-        for mod in [m for m in sys.modules if m not in self.old_modules]:
-            del sys.modules[mod]
-
-    @patch("airflow.settings.import_local_settings")
-    @patch("airflow.settings.prepare_syspath")
-    def test_initialize_order(self, prepare_syspath, import_local_settings):
-        """
-        Tests that import_local_settings is called after prepare_classpath
-        """
-        mock = Mock()
-        mock.attach_mock(prepare_syspath, "prepare_syspath")
-        mock.attach_mock(import_local_settings, "import_local_settings")
-
-        import airflow.settings
-        airflow.settings.initialize()
-
-        mock.assert_has_calls([call.prepare_syspath(), call.import_local_settings()])
-
-    def test_import_with_dunder_all_not_specified(self):
-        """
-        Tests that if __all__ is specified in airflow_local_settings,
-        only module attributes specified within are imported.
-        """
-        with SettingsContext(SETTINGS_FILE_POLICY_WITH_DUNDER_ALL, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-
-            with self.assertRaises(AttributeError):
-                settings.not_policy()
-
-    def test_import_with_dunder_all(self):
-        """
-        Tests that if __all__ is specified in airflow_local_settings,
-        only module attributes specified within are imported.
-        """
-        with SettingsContext(SETTINGS_FILE_POLICY_WITH_DUNDER_ALL, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-
-            task_instance = MagicMock()
-            settings.test_policy(task_instance)
-
-            assert task_instance.run_as_user == "myself"
-
-    @patch("airflow.settings.log.debug")
-    def test_import_local_settings_without_syspath(self, log_mock):
-        """
-        Tests that an ImportError is raised in import_local_settings
-        if there is no airflow_local_settings module on the syspath.
-        """
-        from airflow import settings
-        settings.import_local_settings()
-        log_mock.assert_called_with("Failed to import airflow_local_settings.", exc_info=True)
-
-    def test_policy_function(self):
-        """
-        Tests that task instances are mutated by the policy
-        function in airflow_local_settings.
-        """
-        with SettingsContext(SETTINGS_FILE_POLICY, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-
-            task_instance = MagicMock()
-            settings.test_policy(task_instance)
-
-            assert task_instance.run_as_user == "myself"
-
-    def test_pod_mutation_hook(self):
-        """
-        Tests that pods are mutated by the pod_mutation_hook
-        function in airflow_local_settings.
-        """
-        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-
-            pod = MagicMock()
-            pod.volumes = []
-            settings.pod_mutation_hook(pod)
-
-            assert pod.namespace == 'airflow-tests'
-            self.assertEqual(pod.volumes[0].name, "bar")
-
-    def test_pod_mutation_to_k8s_pod(self):
-        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-            from airflow.kubernetes.pod_launcher import PodLauncher
-
-            self.mock_kube_client = Mock()
-            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
-            pod = pod_generator.PodGenerator(
-                image="foo",
-                name="bar",
-                namespace="baz",
-                image_pull_policy="Never",
-                cmds=["foo"],
-                volume_mounts=[
-                    {"name": "foo", "mount_path": "/mnt", "sub_path": "/", "read_only": "True"}
-                ],
-                volumes=[{"name": "foo"}]
-            ).gen_pod()
-
-            self.assertEqual(pod.metadata.namespace, "baz")
-            self.assertEqual(pod.spec.containers[0].image, "foo")
-            self.assertEqual(pod.spec.volumes, [{'name': 'foo'}])
-            self.assertEqual(pod.spec.containers[0].ports, [])
-            self.assertEqual(pod.spec.containers[0].resources, None)
-
-            pod = self.pod_launcher._mutate_pod_backcompat(pod)
-
-            self.assertEqual(pod.metadata.namespace, "airflow-tests")
-            self.assertEqual(pod.spec.containers[0].image, "my_image")
-            self.assertEqual(pod.spec.volumes, [{'name': 'foo'}, {'name': 'bar'}])
-            self.maxDiff = None
-            self.assertEqual(
-                pod.spec.containers[0].ports[0].to_dict(),
-                {
-                    "container_port": 8080,
-                    "host_ip": None,
-                    "host_port": None,
-                    "name": None,
-                    "protocol": None
-                }
-            )
-            self.assertEqual(
-                pod.spec.containers[0].resources.to_dict(),
-                {
-                    'limits': {
-                        'cpu': None,
-                        'memory': None,
-                        'ephemeral-storage': None,
-                        'nvidia.com/gpu': '200G'},
-                    'requests': {'cpu': '200Mi', 'ephemeral-storage': None, 'memory': '2G'}
-                }
-            )
-
-    def test_pod_mutation_v1_pod(self):
-        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD, "airflow_local_settings"):
-            from airflow import settings
-            settings.import_local_settings()  # pylint: ignore
-            from airflow.kubernetes.pod_launcher import PodLauncher
-
-            self.mock_kube_client = Mock()
-            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
-            pod = pod_generator.PodGenerator(
-                image="myimage",
-                cmds=["foo"],
-                volume_mounts={
-                    "name": "foo", "mount_path": "/mnt", "sub_path": "/", "read_only": "True"
-                },
-                volumes=[{"name": "foo"}]
-            ).gen_pod()
-
-            self.assertEqual(pod.spec.containers[0].image, "myimage")
-            pod = self.pod_launcher._mutate_pod_backcompat(pod)
-            self.assertEqual(pod.spec.containers[0].image, "test-image")
-
-
-class TestStatsWithAllowList(unittest.TestCase):
-
-    def setUp(self):
-        from airflow.settings import SafeStatsdLogger, AllowListValidator
-        self.statsd_client = Mock()
-        self.stats = SafeStatsdLogger(self.statsd_client, AllowListValidator("stats_one, stats_two"))
-
-    def test_increment_counter_with_allowed_key(self):
-        self.stats.incr('stats_one')
-        self.statsd_client.incr.assert_called_once_with('stats_one', 1, 1)
-
-    def test_increment_counter_with_allowed_prefix(self):
-        self.stats.incr('stats_two.bla')
-        self.statsd_client.incr.assert_called_once_with('stats_two.bla', 1, 1)
-
-    def test_not_increment_counter_if_not_allowed(self):
-        self.stats.incr('stats_three')
-        self.statsd_client.assert_not_called()
diff --git a/tests/test_local_settings/__init__.py b/tests/test_local_settings/__init__.py
new file mode 100644
index 0000000..13a8339
--- /dev/null
+++ b/tests/test_local_settings/__init__.py
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
diff --git a/tests/test_local_settings/test_local_settings.py b/tests/test_local_settings/test_local_settings.py
new file mode 100644
index 0000000..7c4abf1
--- /dev/null
+++ b/tests/test_local_settings/test_local_settings.py
@@ -0,0 +1,461 @@
+# -*- coding: utf-8 -*-
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+import os
+import sys
+import tempfile
+import unittest
+from airflow.kubernetes import pod_generator
+from kubernetes.client import ApiClient
+import kubernetes.client.models as k8s
+from tests.compat import MagicMock, Mock, mock, call, patch
+
+api_client = ApiClient()
+
+SETTINGS_FILE_POLICY = """
+def test_policy(task_instance):
+    task_instance.run_as_user = "myself"
+"""
+
+SETTINGS_FILE_POLICY_WITH_DUNDER_ALL = """
+__all__ = ["test_policy"]
+
+def test_policy(task_instance):
+    task_instance.run_as_user = "myself"
+
+def not_policy():
+    print("This shouldn't be imported")
+"""
+
+SETTINGS_FILE_POD_MUTATION_HOOK = """
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.pod import Port, Resources
+
+def pod_mutation_hook(pod):
+    pod.namespace = 'airflow-tests'
+    pod.image = 'my_image'
+    pod.volumes.append(Volume(name="bar", configs={}))
+    pod.ports = [Port(container_port=8080), {"containerPort": 8081}]
+    pod.resources = Resources(
+                    request_memory="2G",
+                    request_cpu="200Mi",
+                    limit_gpu="200G"
+                )
+
+    secret_volume = {
+        "name":  "airflow-secrets-mount",
+        "secret": {
+          "secretName": "airflow-test-secrets"
+        }
+    }
+    secret_volume_mount = {
+      "name": "airflow-secrets-mount",
+      "readOnly": True,
+      "mountPath": "/opt/airflow/secrets/"
+    }
+
+    if pod.init_containers is not None:
+        for i in range(len(pod.init_containers)):
+             init_container = pod.init_containers[i]
+             init_container['securityContext'] = {"runAsGroup":50000,"runAsUser":50000}
+             if init_container['name'] == 'dag-sync':
+                init_container['securityContext'] = {"runAsGroup":40000,"runAsUser":40000}
+
+    pod.volumes.append(secret_volume)
+    pod.volume_mounts.append(secret_volume_mount)
+
+    pod.labels.update({"test_label": "test_value"})
+    pod.envs.update({"TEST_USER": "ADMIN"})
+
+    pod.tolerations += [
+        {"key": "dynamic-pods", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
+    ]
+    pod.affinity.update(
+        {"nodeAffinity":
+            {"requiredDuringSchedulingIgnoredDuringExecution":
+                {"nodeSelectorTerms":
+                    [{
+                        "matchExpressions": [
+                            {"key": "test/dynamic-pods", "operator": "In", "values": ["true"]}
+                        ]
+                    }]
+                }
+            }
+        }
+    )
+
+    if 'fsGroup' in pod.security_context and pod.security_context['fsGroup'] == 0 :
+        del pod.security_context['fsGroup']
+    if 'runAsUser' in pod.security_context and pod.security_context['runAsUser'] == 0 :
+        del pod.security_context['runAsUser']
+
+    if pod.args and pod.args[0] == "/bin/sh":
+        pod.args = ['/bin/sh', '-c', 'touch /tmp/healthy2']
+
+"""
+
+SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD = """
+def pod_mutation_hook(pod):
+    from kubernetes.client import models as k8s
+    secret_volume = {
+        "name":  "airflow-secrets-mount",
+        "secret": {
+          "secretName": "airflow-test-secrets"
+        }
+    }
+    secret_volume_mount = {
+      "name": "airflow-secrets-mount",
+      "readOnly": True,
+      "mountPath": "/opt/airflow/secrets/"
+    }
+    base_container = pod.spec.containers[0]
+    base_container.image = "test-image"
+    base_container.volume_mounts.append(secret_volume_mount)
+    base_container.env.extend([{'name': 'TEST_USER', 'value': 'ADMIN'}])
+    base_container.ports.extend([{'containerPort': 8080}, k8s.V1ContainerPort(container_port=8081)])
+
+    pod.spec.volumes.append(secret_volume)
+    pod.metadata.namespace = 'airflow-tests'
+
+"""
+
+
+class SettingsContext:
+    def __init__(self, content, module_name):
+        self.content = content
+        self.settings_root = tempfile.mkdtemp()
+        filename = "{}.py".format(module_name)
+        self.settings_file = os.path.join(self.settings_root, filename)
+
+    def __enter__(self):
+        with open(self.settings_file, 'w') as handle:
+            handle.writelines(self.content)
+        sys.path.append(self.settings_root)
+        return self.settings_file
+
+    def __exit__(self, *exc_info):
+        sys.path.remove(self.settings_root)
+
+
+class LocalSettingsTest(unittest.TestCase):
+    # Make sure that the configure_logging is not cached
+    def setUp(self):
+        self.old_modules = dict(sys.modules)
+        self.maxDiff = None
+
+    def tearDown(self):
+        # Remove any new modules imported during the test run. This lets us
+        # import the same source files for more than one test.
+        for mod in [m for m in sys.modules if m not in self.old_modules]:
+            del sys.modules[mod]
+
+    @patch("airflow.settings.import_local_settings")
+    @patch("airflow.settings.prepare_syspath")
+    def test_initialize_order(self, prepare_syspath, import_local_settings):
+        """
+        Tests that import_local_settings is called after prepare_syspath
+        """
+        mock = Mock()
+        mock.attach_mock(prepare_syspath, "prepare_syspath")
+        mock.attach_mock(import_local_settings, "import_local_settings")
+
+        import airflow.settings
+        airflow.settings.initialize()
+
+        mock.assert_has_calls([call.prepare_syspath(), call.import_local_settings()])
+
+    def test_import_with_dunder_all_not_specified(self):
+        """
+        Tests that if __all__ is specified in airflow_local_settings,
+        module attributes not listed in it (such as not_policy) are not imported.
+        """
+        with SettingsContext(SETTINGS_FILE_POLICY_WITH_DUNDER_ALL, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+
+            with self.assertRaises(AttributeError):
+                settings.not_policy()
+
+    def test_import_with_dunder_all(self):
+        """
+        Tests that if __all__ is specified in airflow_local_settings,
+        only module attributes specified within are imported.
+        """
+        with SettingsContext(SETTINGS_FILE_POLICY_WITH_DUNDER_ALL, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+
+            task_instance = MagicMock()
+            settings.test_policy(task_instance)
+
+            assert task_instance.run_as_user == "myself"
+
+    @patch("airflow.settings.log.debug")
+    def test_import_local_settings_without_syspath(self, log_mock):
+        """
+        Tests that an ImportError is raised in import_local_settings
+        if there is no airflow_local_settings module on the syspath.
+        """
+        from airflow import settings
+        settings.import_local_settings()
+        log_mock.assert_called_with("Failed to import airflow_local_settings.", exc_info=True)
+
+    def test_policy_function(self):
+        """
+        Tests that task instances are mutated by the policy
+        function in airflow_local_settings.
+        """
+        with SettingsContext(SETTINGS_FILE_POLICY, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+
+            task_instance = MagicMock()
+            settings.test_policy(task_instance)
+
+            assert task_instance.run_as_user == "myself"
+
+    def test_pod_mutation_hook(self):
+        """
+        Tests that pods are mutated by the pod_mutation_hook
+        function in airflow_local_settings.
+        """
+        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+
+            pod = MagicMock()
+            pod.volumes = []
+            settings.pod_mutation_hook(pod)
+
+            assert pod.namespace == 'airflow-tests'
+            self.assertEqual(pod.volumes[0].name, "bar")
+
+    def test_pod_mutation_to_k8s_pod(self):
+        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+            from airflow.kubernetes.pod_launcher import PodLauncher
+
+            self.mock_kube_client = Mock()
+            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
+            init_container = k8s.V1Container(
+                name="init-container",
+                volume_mounts=[k8s.V1VolumeMount(mount_path="/tmp", name="init-secret")]
+            )
+            pod = pod_generator.PodGenerator(
+                image="foo",
+                name="bar",
+                namespace="baz",
+                image_pull_policy="Never",
+                init_containers=[init_container],
+                cmds=["foo"],
+                args=["/bin/sh", "-c", "touch /tmp/healthy"],
+                tolerations=[
+                    {'effect': 'NoSchedule',
+                     'key': 'static-pods',
+                     'operator': 'Equal',
+                     'value': 'true'}
+                ],
+                volume_mounts=[
+                    {"name": "foo", "mountPath": "/mnt", "subPath": "/", "readOnly": True}
+                ],
+                security_context=k8s.V1PodSecurityContext(fs_group=0, run_as_user=1),
+                volumes=[k8s.V1Volume(name="foo")]
+            ).gen_pod()
+
+            sanitized_pod_pre_mutation = api_client.sanitize_for_serialization(pod)
+            self.assertEqual(
+                sanitized_pod_pre_mutation,
+                {'apiVersion': 'v1',
+                 'kind': 'Pod',
+                 'metadata': {'name': mock.ANY,
+                              'namespace': 'baz'},
+                 'spec': {'containers': [{'args': ['/bin/sh', '-c', 'touch /tmp/healthy'],
+                                          'command': ['foo'],
+                                          'env': [],
+                                          'envFrom': [],
+                                          'image': 'foo',
+                                          'imagePullPolicy': 'Never',
+                                          'name': 'base',
+                                          'ports': [],
+                                          'volumeMounts': [{'mountPath': '/mnt',
+                                                            'name': 'foo',
+                                                            'readOnly': True,
+                                                            'subPath': '/'}]}],
+                          'initContainers': [{'name': 'init-container',
+                                              'volumeMounts': [{'mountPath': '/tmp',
+                                                                'name': 'init-secret'}]}],
+                          'hostNetwork': False,
+                          'imagePullSecrets': [],
+                          'tolerations': [{'effect': 'NoSchedule',
+                                           'key': 'static-pods',
+                                           'operator': 'Equal',
+                                           'value': 'true'}],
+                          'volumes': [{'name': 'foo'}],
+                          'securityContext': {'fsGroup': 0, 'runAsUser': 1}}},
+            )
+
+            # Apply Pod Mutation Hook
+            pod = self.pod_launcher._mutate_pod_backcompat(pod)
+
+            sanitized_pod_post_mutation = api_client.sanitize_for_serialization(pod)
+
+            self.assertEqual(
+                sanitized_pod_post_mutation,
+                {"apiVersion": "v1",
+                 "kind": "Pod",
+                 'metadata': {'labels': {'test_label': 'test_value'},
+                              'name': mock.ANY,
+                              'namespace': 'airflow-tests'},
+                 'spec': {'affinity': {'nodeAffinity': {'requiredDuringSchedulingIgnoredDuringExecution': {
+                     'nodeSelectorTerms': [{'matchExpressions': [{'key': 'test/dynamic-pods',
+                                                                  'operator': 'In',
+                                                                  'values': ['true']}]}]}}},
+                          'containers': [{'args': ['/bin/sh', '-c', 'touch /tmp/healthy2'],
+                                          'command': ['foo'],
+                                          'env': [{'name': 'TEST_USER', 'value': 'ADMIN'}],
+                                          'image': 'my_image',
+                                          'imagePullPolicy': 'Never',
+                                          'name': 'base',
+                                          'ports': [{'containerPort': 8080},
+                                                    {'containerPort': 8081}],
+                                          'resources': {'limits': {'nvidia.com/gpu': '200G'},
+                                                        'requests': {'cpu': '200Mi',
+                                                                     'memory': '2G'}},
+                                          'volumeMounts': [{'mountPath': '/opt/airflow/secrets/',
+                                                            'name': 'airflow-secrets-mount',
+                                                            'readOnly': True},
+                                                           {'mountPath': '/mnt',
+                                                            'name': 'foo',
+                                                            'readOnly': True,
+                                                            'subPath': '/'}
+                                                           ]}],
+                          'hostNetwork': False,
+                          'imagePullSecrets': [],
+                          'initContainers': [{'name': 'init-container',
+                                              'securityContext': {'runAsGroup': 50000,
+                                                                  'runAsUser': 50000},
+                                              'volumeMounts': [{'mountPath': '/tmp',
+                                                                'name': 'init-secret'}]}],
+                          'tolerations': [{'effect': 'NoSchedule',
+                                           'key': 'static-pods',
+                                           'operator': 'Equal',
+                                           'value': 'true'},
+                                          {'effect': 'NoSchedule',
+                                           'key': 'dynamic-pods',
+                                           'operator': 'Equal',
+                                           'value': 'true'}],
+                          'volumes': [{'name': 'airflow-secrets-mount',
+                                       'secret': {'secretName': 'airflow-test-secrets'}},
+                                      {'name': 'bar'},
+                                      {'name': 'foo'},
+                                      ],
+                          'securityContext': {'runAsUser': 1}}}
+            )
+
+    def test_pod_mutation_v1_pod(self):
+        with SettingsContext(SETTINGS_FILE_POD_MUTATION_HOOK_V1_POD, "airflow_local_settings"):
+            from airflow import settings
+            settings.import_local_settings()  # pylint: ignore
+            from airflow.kubernetes.pod_launcher import PodLauncher
+
+            self.mock_kube_client = Mock()
+            self.pod_launcher = PodLauncher(kube_client=self.mock_kube_client)
+            pod = pod_generator.PodGenerator(
+                image="myimage",
+                cmds=["foo"],
+                namespace="baz",
+                volume_mounts=[
+                    {"name": "foo", "mountPath": "/mnt", "subPath": "/", "readOnly": True}
+                ],
+                volumes=[{"name": "foo"}]
+            ).gen_pod()
+
+            sanitized_pod_pre_mutation = api_client.sanitize_for_serialization(pod)
+
+            self.assertEqual(
+                sanitized_pod_pre_mutation,
+                {'apiVersion': 'v1',
+                 'kind': 'Pod',
+                 'metadata': {'namespace': 'baz'},
+                 'spec': {'containers': [{'args': [],
+                                          'command': ['foo'],
+                                          'env': [],
+                                          'envFrom': [],
+                                          'image': 'myimage',
+                                          'name': 'base',
+                                          'ports': [],
+                                          'volumeMounts': [{'mountPath': '/mnt',
+                                                            'name': 'foo',
+                                                            'readOnly': True,
+                                                            'subPath': '/'}]}],
+                          'hostNetwork': False,
+                          'imagePullSecrets': [],
+                          'volumes': [{'name': 'foo'}]}}
+            )
+
+            # Apply Pod Mutation Hook
+            pod = self.pod_launcher._mutate_pod_backcompat(pod)
+
+            sanitized_pod_post_mutation = api_client.sanitize_for_serialization(pod)
+            self.assertEqual(
+                sanitized_pod_post_mutation,
+                {'apiVersion': 'v1',
+                 'kind': 'Pod',
+                 'metadata': {'namespace': 'airflow-tests'},
+                 'spec': {'containers': [{'args': [],
+                                          'command': ['foo'],
+                                          'env': [{'name': 'TEST_USER', 'value': 'ADMIN'}],
+                                          'envFrom': [],
+                                          'image': 'test-image',
+                                          'name': 'base',
+                                          'ports': [{'containerPort': 8080}, {'containerPort': 8081}],
+                                          'volumeMounts': [{'mountPath': '/mnt',
+                                                            'name': 'foo',
+                                                            'readOnly': True,
+                                                            'subPath': '/'},
+                                                           {'mountPath': '/opt/airflow/secrets/',
+                                                            'name': 'airflow-secrets-mount',
+                                                            'readOnly': True}]}],
+                          'hostNetwork': False,
+                          'imagePullSecrets': [],
+                          'volumes': [{'name': 'foo'},
+                                      {'name': 'airflow-secrets-mount',
+                                       'secret': {'secretName': 'airflow-test-secrets'}}]}}
+            )
+
+
+class TestStatsWithAllowList(unittest.TestCase):
+
+    def setUp(self):
+        from airflow.settings import SafeStatsdLogger, AllowListValidator
+        self.statsd_client = Mock()
+        self.stats = SafeStatsdLogger(self.statsd_client, AllowListValidator("stats_one, stats_two"))
+
+    def test_increment_counter_with_allowed_key(self):
+        self.stats.incr('stats_one')
+        self.statsd_client.incr.assert_called_once_with('stats_one', 1, 1)
+
+    def test_increment_counter_with_allowed_prefix(self):
+        self.stats.incr('stats_two.bla')
+        self.statsd_client.incr.assert_called_once_with('stats_two.bla', 1, 1)
+
+    def test_not_increment_counter_if_not_allowed(self):
+        self.stats.incr('stats_three')
+        self.statsd_client.assert_not_called()
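
For readers tracing these tests back to a deployment, the hook they exercise lives in an
airflow_local_settings.py module on the scheduler's PYTHONPATH. A minimal sketch of the
V1Pod-style variant tested above might look like the following (the module contents and the
TEST_USER variable are illustrative assumptions, not part of this patch):

    # airflow_local_settings.py -- illustrative sketch only
    from kubernetes.client import models as k8s

    def pod_mutation_hook(pod):
        # 'pod' is a kubernetes.client.models.V1Pod; mutate it in place
        pod.metadata.namespace = 'airflow-tests'
        pod.spec.containers[0].env.append(
            k8s.V1EnvVar(name='TEST_USER', value='ADMIN')
        )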


[airflow] 05/32: Tests should also be triggered when there is just setup.py change (#9690)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 0718977d8b15754eec0a9bafbd8a44e556b1d5bb
Author: Jarek Potiuk <ja...@polidea.com>
AuthorDate: Mon Jul 6 20:41:35 2020 +0200

    Tests should also be triggered when there is just setup.py change (#9690)
    
    So far, tests were not triggered when only the requirements changed,
    but running them in that case is in fact needed.
    
    (cherry picked from commit 72abf824cef6a1d82ecf882756206f02ed6a6864)
---
 .github/workflows/ci.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 029c341..134bc1f 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -113,7 +113,7 @@ jobs:
         run: |
           set +e
           ./scripts/ci/ci_count_changed_files.sh ${GITHUB_SHA} \
-              '^airflow|.github/workflows/|^Dockerfile|^scripts|^chart'
+              '^airflow|.github/workflows/|^Dockerfile|^scripts|^chart|^setup.py|^requirements|^tests|^kubernetes_tests'
           echo "::set-output name=count::$?"
         id: trigger-tests
 


[airflow] 31/32: Fix KubernetesPodOperator reattachment (#10230)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit c47a7c443056382401c05363d0e57b8301f1bf31
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Tue Aug 11 07:01:27 2020 -0700

    Fix KubernetesPodOperator reattachment (#10230)
    
    (cherry picked from commit 8cd2be9e161635480581a0dc723b69ed24166f8d)
---
 .../contrib/operators/kubernetes_pod_operator.py   | 46 ++++++++++++++++------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/airflow/contrib/operators/kubernetes_pod_operator.py b/airflow/contrib/operators/kubernetes_pod_operator.py
index 41f0df3..98464b7 100644
--- a/airflow/contrib/operators/kubernetes_pod_operator.py
+++ b/airflow/contrib/operators/kubernetes_pod_operator.py
@@ -270,23 +270,16 @@ class KubernetesPodOperator(BaseOperator):  # pylint: disable=too-many-instance-
 
             pod_list = client.list_namespaced_pod(self.namespace, label_selector=label_selector)
 
-            if len(pod_list.items) > 1:
+            if len(pod_list.items) > 1 and self.reattach_on_restart:
                 raise AirflowException(
                     'More than one pod running with labels: '
                     '{label_selector}'.format(label_selector=label_selector))
 
             launcher = pod_launcher.PodLauncher(kube_client=client, extract_xcom=self.do_xcom_push)
 
-            if len(pod_list.items) == 1 and \
-                    self._try_numbers_do_not_match(context, pod_list.items[0]) and \
-                    self.reattach_on_restart:
-                self.log.info("found a running pod with labels %s but a different try_number"
-                              "Will attach to this pod and monitor instead of starting new one", labels)
-                final_state, _, result = self.create_new_pod_for_operator(labels, launcher)
-            elif len(pod_list.items) == 1:
-                self.log.info("found a running pod with labels %s."
-                              "Will monitor this pod instead of starting new one", labels)
-                final_state, result = self.monitor_launched_pod(launcher, pod_list[0])
+            if len(pod_list.items) == 1:
+                try_numbers_match = self._try_numbers_match(context, pod_list.items[0])
+                final_state, result = self.handle_pod_overlap(labels, try_numbers_match, launcher, pod_list)
             else:
                 final_state, _, result = self.create_new_pod_for_operator(labels, launcher)
             if final_state != State.SUCCESS:
@@ -296,14 +289,41 @@ class KubernetesPodOperator(BaseOperator):  # pylint: disable=too-many-instance-
         except AirflowException as ex:
             raise AirflowException('Pod Launching failed: {error}'.format(error=ex))
 
+    def handle_pod_overlap(self, labels, try_numbers_match, launcher, pod_list):
+        """
+        In cases where the Scheduler restarts while a KubernetesPodOperator task is running,
+        this function will either continue to monitor the existing pod or launch a new pod
+        based on the `reattach_on_restart` parameter.
+        :param labels: labels used to determine if a pod is repeated
+        :type labels: dict
+        :param try_numbers_match: do the try numbers match? Only needed for logging purposes
+        :type try_numbers_match: bool
+        :param launcher: PodLauncher
+        :param pod_list: list of pods found
+        """
+        if try_numbers_match:
+            log_line = "found a running pod with labels {} and the same try_number.".format(labels)
+        else:
+            log_line = "found a running pod with labels {} but a different try_number.".format(labels)
+
+        if self.reattach_on_restart:
+            log_line = log_line + " Will attach to this pod and monitor instead of starting new one"
+            self.log.info(log_line)
+            final_state, result = self.monitor_launched_pod(launcher, pod_list.items[0])
+        else:
+            log_line = log_line + " Creating pod with labels {} and launcher {}".format(labels, launcher)
+            self.log.info(log_line)
+            final_state, _, result = self.create_new_pod_for_operator(labels, launcher)
+        return final_state, result
+
     @staticmethod
     def _get_pod_identifying_label_string(labels):
         filtered_labels = {label_id: label for label_id, label in labels.items() if label_id != 'try_number'}
         return ','.join([label_id + '=' + label for label_id, label in sorted(filtered_labels.items())])
 
     @staticmethod
-    def _try_numbers_do_not_match(context, pod):
-        return pod.metadata.labels['try_number'] != context['ti'].try_number
+    def _try_numbers_match(context, pod):
+        return pod.metadata.labels['try_number'] == context['ti'].try_number
 
     @staticmethod
     def _set_resources(resources):
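
As a usage note for the behaviour fixed here: reattachment is controlled by the operator's
reattach_on_restart argument. A hedged sketch of a task that opts in (task id, image and the
surrounding DAG are placeholders, not taken from this commit):

    from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

    task = KubernetesPodOperator(
        task_id='pod-task',                 # placeholder task id
        name='pod-task',
        namespace='default',
        image='python:3.7-slim',            # placeholder image
        cmds=['python', '-c', 'print("ok")'],
        reattach_on_restart=True,           # re-attach to a still-running pod after a scheduler restart
        dag=dag,                            # assumes a DAG object defined elsewhere
    )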


[airflow] 03/32: Fix task_instance_mutation_hook (#9910)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit e6b017aff28ef2f7c08998386e8221abe1b553de
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Wed Jul 22 16:55:54 2020 +0200

    Fix task_instance_mutation_hook (#9910)
    
    Fixes #9902
---
 airflow/__init__.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/airflow/__init__.py b/airflow/__init__.py
index c8dcd21..287efe3 100644
--- a/airflow/__init__.py
+++ b/airflow/__init__.py
@@ -40,12 +40,14 @@ import sys
 from airflow import utils
 from airflow import settings
 from airflow.configuration import conf
-from airflow.models import DAG
 from flask_admin import BaseView
 from importlib import import_module
 from airflow.exceptions import AirflowException
 
 settings.initialize()
+# Delay the import of airflow.models to be after the settings initialization to make sure that
+# any reference to settings functions (e.g. task_instance_mutation_hook) holds the expected implementation
+from airflow.models import DAG  # noqa: E402
 
 login = None  # type: Any
 log = logging.getLogger(__name__)
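
For context, the hook whose resolution this import ordering protects is defined in
airflow_local_settings.py. A minimal sketch (the retry queue name is an assumption, not part
of this change):

    # airflow_local_settings.py -- minimal sketch only
    def task_instance_mutation_hook(task_instance):
        # e.g. route retries to a dedicated queue
        if task_instance.try_number > 1:
            task_instance.queue = 'retry_queue'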


[airflow] 27/32: Make XCom 2.7 compatible

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit ec1cb7da1464708ba83bbfe0bc5efc367ea361c3
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Mon Aug 3 14:23:18 2020 -0700

    Make XCom 2.7 compatible
---
 airflow/models/xcom.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/airflow/models/xcom.py b/airflow/models/xcom.py
index 0b6a81d..9861c2c 100644
--- a/airflow/models/xcom.py
+++ b/airflow/models/xcom.py
@@ -234,7 +234,7 @@ class BaseXCom(Base, LoggingMixin):
             raise
 
     @staticmethod
-    def deserialize_value(result) -> Any:
+    def deserialize_value(result):
         # TODO: "pickling" has been deprecated and JSON is preferred.
         # "pickling" will be removed in Airflow 2.0.
         enable_pickling = conf.getboolean('core', 'enable_xcom_pickling')
@@ -253,11 +253,13 @@ class BaseXCom(Base, LoggingMixin):
 
 def resolve_xcom_backend():
     """Resolves custom XCom class"""
-    clazz = conf.getimport("core", "xcom_backend", fallback=f"airflow.models.xcom.{BaseXCom.__name__}")
+    clazz = conf.getimport("core", "xcom_backend", fallback="airflow.models.xcom.{}"
+                           .format(BaseXCom.__name__))
     if clazz:
         if not issubclass(clazz, BaseXCom):
             raise TypeError(
-                f"Your custom XCom class `{clazz.__name__}` is not a subclass of `{BaseXCom.__name__}`."
+                "Your custom XCom class `{class_name}` is not a subclass of `{base_name}`."
+                .format(class_name=clazz.__name__, base_name=BaseXCom.__name__)
             )
         return clazz
     return BaseXCom
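
To show how resolve_xcom_backend is intended to be used, a hedged sketch of a custom backend
follows; the module and class names are assumptions. The class must subclass BaseXCom and be
importable from the path configured under [core] xcom_backend:

    # my_xcom_backend.py -- illustrative sketch only
    from airflow.models.xcom import BaseXCom

    class CustomXCom(BaseXCom):
        @staticmethod
        def deserialize_value(result):
            # delegate to the default behaviour; a real backend might
            # resolve a reference to an external object store here
            return BaseXCom.deserialize_value(result)

    # airflow.cfg
    # [core]
    # xcom_backend = my_xcom_backend.CustomXCom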


[airflow] 19/32: Fix bug in executor_config when defining resources (#9935)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 05ec21a22f84cdbe2aaed38b712c30f2cbb38b59
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Thu Jul 23 19:52:20 2020 -0700

    Fix bug in executor_config when defining resources (#9935)
    
    * Fix PodGenerator to handle Kubernetes resources
    
    In Airflow 1.10.11, `namespaced['resources'] = resources` is missing.
    This PR improves the definition of pod resources; `requests` and `limits` are now optional.
    
    * Make it working in 2.7
    
    * Add limit_gpu and fix ephemeral-storage keys
    
    * Fix flake8
    
    Co-authored-by: Riccardo Bini <od...@gmail.com>
---
 airflow/kubernetes/pod.py              |   4 +-
 airflow/kubernetes/pod_generator.py    |  33 ++++++---
 tests/kubernetes/test_pod_generator.py | 132 +++++++++++++++++++++++++++++++++
 3 files changed, 155 insertions(+), 14 deletions(-)

diff --git a/airflow/kubernetes/pod.py b/airflow/kubernetes/pod.py
index b1df462..0b332c2 100644
--- a/airflow/kubernetes/pod.py
+++ b/airflow/kubernetes/pod.py
@@ -33,7 +33,7 @@ class Resources(K8SModel):
     :type request_memory: str
     :param request_cpu: requested CPU number
     :type request_cpu: float | str
-    :param request_ephemeral_storage: requested ephermeral storage
+    :param request_ephemeral_storage: requested ephemeral storage
     :type request_ephemeral_storage: str
     :param limit_memory: limit for memory usage
     :type limit_memory: str
@@ -41,7 +41,7 @@ class Resources(K8SModel):
     :type limit_cpu: float | str
     :param limit_gpu: Limits for GPU used
     :type limit_gpu: int
-    :param limit_ephemeral_storage: Limit for ephermeral storage
+    :param limit_ephemeral_storage: Limit for ephemeral storage
     :type limit_ephemeral_storage: float | str
     """
     def __init__(
diff --git a/airflow/kubernetes/pod_generator.py b/airflow/kubernetes/pod_generator.py
index e46407b..d11c175 100644
--- a/airflow/kubernetes/pod_generator.py
+++ b/airflow/kubernetes/pod_generator.py
@@ -344,18 +344,26 @@ class PodGenerator(object):
         resources = namespaced.get('resources')
 
         if resources is None:
-            requests = {
-                'cpu': namespaced.get('request_cpu'),
-                'memory': namespaced.get('request_memory'),
-                'ephemeral-storage': namespaced.get('ephemeral-storage')
-            }
-            limits = {
-                'cpu': namespaced.get('limit_cpu'),
-                'memory': namespaced.get('limit_memory'),
-                'ephemeral-storage': namespaced.get('ephemeral-storage')
-            }
-            all_resources = list(requests.values()) + list(limits.values())
-            if all(r is None for r in all_resources):
+            def extract(cpu, memory, ephemeral_storage, limit_gpu=None):
+                resources_obj = {
+                    'cpu': namespaced.pop(cpu, None),
+                    'memory': namespaced.pop(memory, None),
+                    'ephemeral-storage': namespaced.pop(ephemeral_storage, None),
+                }
+                if limit_gpu is not None:
+                    resources_obj['nvidia.com/gpu'] = namespaced.pop(limit_gpu, None)
+
+                resources_obj = {k: v for k, v in resources_obj.items() if v is not None}
+
+                if all(r is None for r in resources_obj):
+                    resources_obj = None
+                return namespaced, resources_obj
+
+            namespaced, requests = extract('request_cpu', 'request_memory', 'request_ephemeral_storage')
+            namespaced, limits = extract('limit_cpu', 'limit_memory', 'limit_ephemeral_storage',
+                                         limit_gpu='limit_gpu')
+
+            if requests is None and limits is None:
                 resources = None
             else:
                 resources = k8s.V1ResourceRequirements(
@@ -371,6 +379,7 @@ class PodGenerator(object):
                 'iam.cloud.google.com/service-account': gcp_service_account_key
             })
 
+        namespaced['resources'] = resources
         return PodGenerator(**namespaced).gen_pod()
 
     @staticmethod
diff --git a/tests/kubernetes/test_pod_generator.py b/tests/kubernetes/test_pod_generator.py
index 7d39cdc..d0faf4c 100644
--- a/tests/kubernetes/test_pod_generator.py
+++ b/tests/kubernetes/test_pod_generator.py
@@ -288,6 +288,138 @@ class TestPodGenerator(unittest.TestCase):
         }, result)
 
     @mock.patch('uuid.uuid4')
+    def test_from_obj_with_resources(self, mock_uuid):
+        self.maxDiff = None
+
+        mock_uuid.return_value = self.static_uuid
+        result = PodGenerator.from_obj({
+            "KubernetesExecutor": {
+                "annotations": {"test": "annotation"},
+                "volumes": [
+                    {
+                        "name": "example-kubernetes-test-volume",
+                        "hostPath": {"path": "/tmp/"},
+                    },
+                ],
+                "volume_mounts": [
+                    {
+                        "mountPath": "/foo/",
+                        "name": "example-kubernetes-test-volume",
+                    },
+                ],
+                'request_cpu': "200m",
+                'limit_cpu': "400m",
+                'request_memory': "500Mi",
+                'limit_memory': "1000Mi",
+                'limit_gpu': "2",
+                'request_ephemeral_storage': '2Gi',
+                'limit_ephemeral_storage': '4Gi',
+            }
+        })
+        result = self.k8s_client.sanitize_for_serialization(result)
+
+        self.assertEqual({
+            'apiVersion': 'v1',
+            'kind': 'Pod',
+            'metadata': {
+                'annotations': {'test': 'annotation'},
+            },
+            'spec': {
+                'containers': [{
+                    'args': [],
+                    'command': [],
+                    'env': [],
+                    'envFrom': [],
+                    'name': 'base',
+                    'ports': [],
+                    'resources': {
+                        'limits': {
+                            'cpu': '400m',
+                            'ephemeral-storage': '4Gi',
+                            'memory': '1000Mi',
+                            'nvidia.com/gpu': "2",
+                        },
+                        'requests': {
+                            'cpu': '200m',
+                            'ephemeral-storage': '2Gi',
+                            'memory': '500Mi',
+                        },
+                    },
+                    'volumeMounts': [{
+                        'mountPath': '/foo/',
+                        'name': 'example-kubernetes-test-volume'
+                    }],
+                }],
+                'hostNetwork': False,
+                'imagePullSecrets': [],
+                'volumes': [{
+                    'hostPath': {'path': '/tmp/'},
+                    'name': 'example-kubernetes-test-volume'
+                }],
+            }
+        }, result)
+
+    @mock.patch('uuid.uuid4')
+    def test_from_obj_with_only_request_resources(self, mock_uuid):
+        self.maxDiff = None
+
+        mock_uuid.return_value = self.static_uuid
+        result = PodGenerator.from_obj({
+            "KubernetesExecutor": {
+                "annotations": {"test": "annotation"},
+                "volumes": [
+                    {
+                        "name": "example-kubernetes-test-volume",
+                        "hostPath": {"path": "/tmp/"},
+                    },
+                ],
+                "volume_mounts": [
+                    {
+                        "mountPath": "/foo/",
+                        "name": "example-kubernetes-test-volume",
+                    },
+                ],
+                'request_cpu': "200m",
+                'request_memory': "500Mi",
+            }
+        })
+        result = self.k8s_client.sanitize_for_serialization(result)
+
+        self.assertEqual({
+            'apiVersion': 'v1',
+            'kind': 'Pod',
+            'metadata': {
+                'annotations': {'test': 'annotation'},
+            },
+            'spec': {
+                'containers': [{
+                    'args': [],
+                    'command': [],
+                    'env': [],
+                    'envFrom': [],
+                    'name': 'base',
+                    'ports': [],
+                    'resources': {
+                        'requests': {
+                            'cpu': '200m',
+                            'memory': '500Mi',
+                        },
+                    },
+                    'volumeMounts': [{
+                        'mountPath': '/foo/',
+                        'name': 'example-kubernetes-test-volume'
+                    }],
+                }],
+                'hostNetwork': False,
+                'imagePullSecrets': [],
+                'volumes': [{
+                    'hostPath': {'path': '/tmp/'},
+                    'name': 'example-kubernetes-test-volume'
+                }],
+            }
+        }, result)
+
+    @mock.patch('uuid.uuid4')
     def test_reconcile_pods_empty_mutator_pod(self, mock_uuid):
         mock_uuid.return_value = self.static_uuid
         base_pod = PodGenerator(
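
The resource keys exercised by the new tests map directly onto a task's executor_config with
the KubernetesExecutor. A hedged example of how a DAG author might pass them (the operator,
callable and values are placeholders, not taken from this commit):

    from airflow.operators.python_operator import PythonOperator

    task = PythonOperator(
        task_id='resource-hungry-task',        # placeholder
        python_callable=lambda: None,          # placeholder callable
        executor_config={
            "KubernetesExecutor": {
                "request_cpu": "200m",
                "request_memory": "500Mi",
                "limit_cpu": "400m",
                "limit_memory": "1000Mi",
                "limit_gpu": "2",              # rendered as nvidia.com/gpu in the pod spec
            }
        },
        dag=dag,                               # assumes a DAG object defined elsewhere
    )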


[airflow] 32/32: Makes multi-namespace mode optional (#9570)

Posted by ka...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch v1-10-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 242d6d0e9a1955b11677d2be1b7ae5e28243e619
Author: Daniel Imberman <da...@gmail.com>
AuthorDate: Mon Aug 10 13:41:40 2020 -0700

    Makes multi-namespace mode optional (#9570)
    
    Running the Airflow KubernetesExecutor with multi-namespace support
    requires creating a ClusterRole, which can break existing deployments.
    
    Co-authored-by: Daniel Imberman <da...@astronomer.io>
    (cherry picked from commit 2e3c878066f9241d17f2e4ba41fe0e2ba02de79e)
---
 airflow/config_templates/config.yml          |  7 ++++++
 airflow/config_templates/default_airflow.cfg |  4 ++++
 airflow/executors/kubernetes_executor.py     | 32 +++++++++++++++++++++++-----
 3 files changed, 38 insertions(+), 5 deletions(-)

diff --git a/airflow/config_templates/config.yml b/airflow/config_templates/config.yml
index f54255e..75c47cb 100644
--- a/airflow/config_templates/config.yml
+++ b/airflow/config_templates/config.yml
@@ -1812,6 +1812,13 @@
       type: string
       example: ~
       default: "default"
+    - name: multi_namespace_mode
+      description: |
+        Allows users to launch pods in multiple namespaces.
+        Will require creating a cluster-role for the scheduler
+      type: boolean
+      example: ~
+      default: "False"
     - name: airflow_configmap
       description: |
         The name of the Kubernetes ConfigMap containing the Airflow Configuration (this file)
diff --git a/airflow/config_templates/default_airflow.cfg b/airflow/config_templates/default_airflow.cfg
index e18e538..3a9bba2 100644
--- a/airflow/config_templates/default_airflow.cfg
+++ b/airflow/config_templates/default_airflow.cfg
@@ -838,6 +838,10 @@ worker_pods_creation_batch_size = 1
 # The Kubernetes namespace where airflow workers should be created. Defaults to ``default``
 namespace = default
 
+# Allows users to launch pods in multiple namespaces.
+# Will require creating a cluster-role for the scheduler
+multi_namespace_mode = False
+
 # The name of the Kubernetes ConfigMap containing the Airflow Configuration (this file)
 # Example: airflow_configmap = airflow-configmap
 airflow_configmap =
diff --git a/airflow/executors/kubernetes_executor.py b/airflow/executors/kubernetes_executor.py
index 3ad4222..7b31b45 100644
--- a/airflow/executors/kubernetes_executor.py
+++ b/airflow/executors/kubernetes_executor.py
@@ -22,6 +22,7 @@ KubernetesExecutor
     :ref:`executor:KubernetesExecutor`
 """
 import base64
+import functools
 import json
 import multiprocessing
 import time
@@ -162,6 +163,7 @@ class KubeConfig:
         # cluster has RBAC enabled, your scheduler may need service account permissions to
         # create, watch, get, and delete pods in this namespace.
         self.kube_namespace = conf.get(self.kubernetes_section, 'namespace')
+        self.multi_namespace_mode = conf.getboolean(self.kubernetes_section, 'multi_namespace_mode')
         # The Kubernetes Namespace in which pods will be created by the executor. Note
         # that if your
         # cluster has RBAC enabled, your workers may need service account permissions to
@@ -254,9 +256,17 @@ class KubeConfig:
 
 class KubernetesJobWatcher(multiprocessing.Process, LoggingMixin):
     """Watches for Kubernetes jobs"""
-    def __init__(self, namespace, watcher_queue, resource_version, worker_uuid, kube_config):
+
+    def __init__(self,
+                 namespace,
+                 multi_namespace_mode,
+                 watcher_queue,
+                 resource_version,
+                 worker_uuid,
+                 kube_config):
         multiprocessing.Process.__init__(self)
         self.namespace = namespace
+        self.multi_namespace_mode = multi_namespace_mode
         self.worker_uuid = worker_uuid
         self.watcher_queue = watcher_queue
         self.resource_version = resource_version
@@ -295,8 +305,16 @@ class KubernetesJobWatcher(multiprocessing.Process, LoggingMixin):
                 kwargs[key] = value
 
         last_resource_version = None
-        for event in watcher.stream(kube_client.list_namespaced_pod, self.namespace,
-                                    **kwargs):
+        if self.multi_namespace_mode:
+            list_worker_pods = functools.partial(watcher.stream,
+                                                 kube_client.list_pod_for_all_namespaces,
+                                                 **kwargs)
+        else:
+            list_worker_pods = functools.partial(watcher.stream,
+                                                 kube_client.list_namespaced_pod,
+                                                 self.namespace,
+                                                 **kwargs)
+        for event in list_worker_pods():
             task = event['object']
             self.log.info(
                 'Event: %s had an event of type %s',
@@ -377,8 +395,12 @@ class AirflowKubernetesScheduler(LoggingMixin):
 
     def _make_kube_watcher(self):
         resource_version = KubeResourceVersion.get_current_resource_version()
-        watcher = KubernetesJobWatcher(self.namespace, self.watcher_queue,
-                                       resource_version, self.worker_uuid, self.kube_config)
+        watcher = KubernetesJobWatcher(watcher_queue=self.watcher_queue,
+                                       namespace=self.kube_config.kube_namespace,
+                                       multi_namespace_mode=self.kube_config.multi_namespace_mode,
+                                       resource_version=resource_version,
+                                       worker_uuid=self.worker_uuid,
+                                       kube_config=self.kube_config)
         watcher.start()
         return watcher
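
Operators who want the new behaviour enable it in the [kubernetes] section of airflow.cfg
(or via the AIRFLOW__KUBERNETES__MULTI_NAMESPACE_MODE environment variable) and grant the
scheduler a cluster-role that can list and watch pods across namespaces. A minimal sketch of
the configuration side, mirroring the default shipped above:

    [kubernetes]
    # requires a cluster-role allowing the scheduler to list/watch pods in all namespaces
    multi_namespace_mode = True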