Posted to commits@airflow.apache.org by po...@apache.org on 2021/03/25 00:52:22 UTC

[airflow] branch v2-0-test updated (fd931e2 -> 4cea470)

This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a change to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git.


 discard fd931e2  Disable Providers tests for v2-0-test branch
 discard b4faca2  Much easier to use and better documented Docker image (#14911)
 discard d908526  Fixes default group of Airflow user. (#14944)
 discard 7641506  Create a documentation package for Docker image (#14846)
 discard b02bef5  Quarantine test_clit_tasks - they have a lot of errors
 discard 9cfdeee  Disable Providers tests for v2-0-test branch
     new d7da7f5  Quarantine test_clit_tasks - they have a lot of errors
     new 768dadc  Create a documentation package for Docker image (#14846)
     new 3cd0bdc  Fixes default group of Airflow user. (#14944)
     new 7bb7380  Much easier to use and better documented Docker image (#14911)
     new 4cea470  Skips provider package builds and provider tests for non-master

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (fd931e2)
            \
             N -- N -- N   refs/heads/v2-0-test (4cea470)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
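
If your local clone still has the discarded tip fd931e2 (for example from a fetch made before the
rewrite), you can compare the old and new revision ranges yourself. A minimal sketch using plain
git (d7da7f5 is the first rewritten commit, so d7da7f5~1 is the common base B; range-diff needs
Git 2.19 or newer):

  # list the rewritten (N) revisions from the common base B up to the new tip
  git log --oneline d7da7f5~1..4cea470

  # compare the discarded (O) range with the rewritten (N) range commit by commit
  git range-diff d7da7f5~1..fd931e2 d7da7f5~1..4cea470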


Summary of changes:
 .github/workflows/ci.yml              | 12 ++++--------
 PULL_REQUEST_WORKFLOW.rst             |  9 ++++++---
 scripts/ci/selective_ci_checks.sh     | 32 ++++++++++++++++++++++++--------
 scripts/in_container/entrypoint_ci.sh |  4 ++--
 4 files changed, 36 insertions(+), 21 deletions(-)

[airflow] 05/05: Skips provider package builds and provider tests for non-master

Posted by po...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 4cea470503f49110e2a07d592457476cf2710167
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Thu Mar 25 01:48:14 2021 +0100

    Skips provider package builds and provider tests for non-master
    
    This PR skips building Provider packages for branches different
    than master. Provider packages are always released from master
    but never from any other branch so there is no point in running
    the package building and tests there
    
    (cherry picked from commit df368f17df361af699dc868af9481ddc3abf0416)
---
 .github/workflows/ci.yml          |  8 ++++----
 PULL_REQUEST_WORKFLOW.rst         |  9 ++++++---
 scripts/ci/selective_ci_checks.sh | 26 +++++++++++++++++++++-----
 3 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 79eb7fb..a70632c 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -154,6 +154,7 @@ jobs:
       needs-helm-tests: ${{ steps.selective-checks.outputs.needs-helm-tests }}
       needs-api-tests: ${{ steps.selective-checks.outputs.needs-api-tests }}
       needs-api-codegen: ${{ steps.selective-checks.outputs.needs-api-codegen }}
+      default-branch: ${{ steps.selective-checks.outputs.default-branch }}
       pullRequestNumber: ${{ steps.source-run-info.outputs.pullRequestNumber }}
       pullRequestLabels: ${{ steps.source-run-info.outputs.pullRequestLabels }}
       runsOn: ${{ steps.set-runs-on.outputs.runsOn }}
@@ -504,12 +505,13 @@ ${{ hashFiles('.pre-commit-config.yaml') }}"
     strategy:
       matrix:
         package-format: ['wheel', 'sdist']
-    if: needs.build-info.outputs.image-build == 'true'
+    if: needs.build-info.outputs.image-build == 'true' && needs.build-info.outputs.default-branch == 'master'
     steps:
       - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )"
         uses: actions/checkout@v2
         with:
           persist-credentials: false
+        if: needs.build-info.outputs.default-branch == 'master'
       - name: "Setup python"
         uses: actions/setup-python@v2
         with:
@@ -551,7 +553,7 @@ ${{ hashFiles('.pre-commit-config.yaml') }}"
     strategy:
       matrix:
         package-format: ['wheel', 'sdist']
-    if: needs.build-info.outputs.image-build == 'true'
+    if: needs.build-info.outputs.image-build == 'true' && needs.build-info.outputs.default-branch == 'master'
     steps:
       - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )"
         uses: actions/checkout@v2
@@ -1033,8 +1035,6 @@ ${{ hashFiles('.pre-commit-config.yaml') }}"
       - tests-postgres
       - tests-mysql
       - tests-kubernetes
-      - prepare-provider-packages
-      - test-provider-packages-released-airflow
       - prod-images
       - docs
     if: >
diff --git a/PULL_REQUEST_WORKFLOW.rst b/PULL_REQUEST_WORKFLOW.rst
index 3e77c53..69ff971 100644
--- a/PULL_REQUEST_WORKFLOW.rst
+++ b/PULL_REQUEST_WORKFLOW.rst
@@ -125,7 +125,8 @@ The logic implemented for the changes works as follows:
 
 1) In case of direct push (so when PR gets merged) or scheduled run, we always run all tests and checks.
    This is in order to make sure that the merge did not miss anything important. The remainder of the logic
-   is executed only in case of Pull Requests.
+   is executed only in case of Pull Requests. We do not add provider tests when DEFAULT_BRANCH is
+   different from master, because providers only matter on the master branch and in PRs targeting master.
 
 2) We retrieve which files have changed in the incoming Merge Commit (github.sha is a merge commit
    automatically prepared by GitHub in case of Pull Request, so we can retrieve the list of changed
@@ -133,7 +134,9 @@ The logic implemented for the changes works as follows:
 
 3) If any of the important, environment files changed (Dockerfile, ci scripts, setup.py, GitHub workflow
    files), then we again run all tests and checks. Those are cases where the logic of the checks changed
-   or the environment for the checks changed so we want to make sure to check everything.
+   or the environment for the checks changed, so we want to make sure to check everything. We do not add
+   provider tests when DEFAULT_BRANCH is different from master, because providers only matter
+   on the master branch and in PRs targeting master.
 
 4) If any of py files changed: we need to have CI image and run full static checks so we enable image building
 
@@ -157,7 +160,7 @@ The logic implemented for the changes works as follows:
    b) if any of the Airflow API files changed we enable ``API`` test type
    c) if any of the Airflow CLI files changed we enable ``CLI`` test type and Kubernetes tests (the
       K8S tests depend on CLI changes as helm chart uses CLI to run Airflow).
-   d) if any of the Provider files changed we enable ``Providers`` test type
+   d) if the default branch is master and any of the Provider files changed, we enable ``Providers`` test type
    e) if any of the WWW files changed we enable ``WWW`` test type
    f) if any of the Kubernetes files changed we enable ``Kubernetes`` test type
    g) Then we subtract count of all the ``specific`` above per-type changed files from the count of
diff --git a/scripts/ci/selective_ci_checks.sh b/scripts/ci/selective_ci_checks.sh
index e1ca0be..cf81b33 100755
--- a/scripts/ci/selective_ci_checks.sh
+++ b/scripts/ci/selective_ci_checks.sh
@@ -122,7 +122,12 @@ function output_all_basic_variables() {
         initialization::ga_output sqlite-exclude '[]'
     fi
 
+
+    initialization::ga_output default-helm-version "${HELM_VERSION}"
     initialization::ga_output kubernetes-exclude '[]'
+
+    initialization::ga_output default-branch "${DEFAULT_BRANCH}"
+
 }
 
 function get_changed_files() {
@@ -198,8 +203,12 @@ function set_upgrade_to_newer_dependencies() {
     initialization::ga_output upgrade-to-newer-dependencies "${@}"
 }
 
-
-ALL_TESTS="Always API Core Other CLI Providers WWW Integration"
+if [[ ${DEFAULT_BRANCH} == "master" ]]; then
+    ALL_TESTS="Always API Core Other CLI Providers WWW Integration"
+else
+    # Skips Provider tests in case current default branch is not master
+    ALL_TESTS="Always API Core Other CLI WWW Integration"
+fi
 readonly ALL_TESTS
 
 function set_outputs_run_everything_and_exit() {
@@ -588,11 +597,18 @@ function calculate_test_types_to_run() {
             SELECTED_TESTS="${SELECTED_TESTS} CLI"
             kubernetes_tests_needed="true"
         fi
-        if [[ ${COUNT_PROVIDERS_CHANGED_FILES} != "0" ]]; then
+
+        if [[ ${DEFAULT_BRANCH} == "master" ]]; then
+            if [[ ${COUNT_PROVIDERS_CHANGED_FILES} != "0" ]]; then
+                echo
+                echo "Adding Providers to selected files as ${COUNT_PROVIDERS_CHANGED_FILES} Provider files changed"
+                echo
+                SELECTED_TESTS="${SELECTED_TESTS} Providers"
+            fi
+        else
             echo
-            echo "Adding Providers to selected files as ${COUNT_PROVIDERS_CHANGED_FILES} Provider files changed"
+            echo "Providers tests are not added because they are only run in case of master branch."
             echo
-            SELECTED_TESTS="${SELECTED_TESTS} Providers"
         fi
         if [[ ${COUNT_WWW_CHANGED_FILES} != "0" ]]; then
             echo

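For reference, the net effect of the selective-checks change above boils down to a short branch
gate. This is only an illustrative bash sketch distilled from the patch, not the full
scripts/ci/selective_ci_checks.sh script:

  #!/usr/bin/env bash
  # DEFAULT_BRANCH is provided by the CI environment of the branch being built
  DEFAULT_BRANCH="${DEFAULT_BRANCH:-master}"

  if [[ ${DEFAULT_BRANCH} == "master" ]]; then
      # Provider packages are released from master, so Provider tests remain selectable
      ALL_TESTS="Always API Core Other CLI Providers WWW Integration"
  else
      # Release/test branches such as v2-0-test never release provider packages,
      # so Provider tests (and provider package build jobs) are skipped
      ALL_TESTS="Always API Core Other CLI WWW Integration"
  fi

  # The same decision is exported as the default-branch job output, which ci.yml
  # uses to guard whole jobs, e.g.:
  #   if: needs.build-info.outputs.default-branch == 'master'
  echo "Test types considered: ${ALL_TESTS}"
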
[airflow] 02/05: Create a documentation package for Docker image (#14846)

Posted by po...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 768dadc21d153ca12be978ba6db5d2685ed8c841
Author: Kamil Breguła <mi...@users.noreply.github.com>
AuthorDate: Sun Mar 21 09:08:10 2021 +0100

    Create a documentation package for Docker image (#14846)
    
    (cherry picked from commit a18cbc4e91b86c7b32589b3b00fa71cceceb755d)
---
 docs/apache-airflow/installation.rst               |   2 +-
 docs/apache-airflow/production-deployment.rst      | 847 +--------------------
 docs/apache-airflow/start/docker.rst               |   2 +-
 docs/build_docs.py                                 |  12 +-
 docs/conf.py                                       |   3 +
 docs/docker-stack/build.rst                        | 511 +++++++++++++
 .../docker-images-recipes/gcloud.Dockerfile        |   0
 .../docker-images-recipes/hadoop.Dockerfile        |   0
 docs/docker-stack/entrypoint.rst                   | 201 +++++
 docs/docker-stack/img/docker-logo.png              | Bin 0 -> 50112 bytes
 docs/docker-stack/index.rst                        |  54 ++
 docs/docker-stack/recipes.rst                      |  70 ++
 docs/exts/airflow_intersphinx.py                   |  13 +-
 .../exts/docs_build/dev_index_template.html.jinja2 |  11 +
 docs/exts/docs_build/docs_builder.py               |  11 +-
 docs/exts/docs_build/fetch_inventories.py          |  51 +-
 16 files changed, 915 insertions(+), 873 deletions(-)

diff --git a/docs/apache-airflow/installation.rst b/docs/apache-airflow/installation.rst
index eac6894..0184216 100644
--- a/docs/apache-airflow/installation.rst
+++ b/docs/apache-airflow/installation.rst
@@ -27,7 +27,7 @@ installation with other tools as well.
 
 .. note::
 
-    Airflow is also distributed as a Docker image (OCI Image). For more information, see: :ref:`docker_image`
+    Airflow is also distributed as a Docker image (OCI Image). Consider using it to guarantee that software will always run the same no matter where it is deployed. For more information, see: :doc:`docker-stack:index`.
 
 Prerequisites
 '''''''''''''
diff --git a/docs/apache-airflow/production-deployment.rst b/docs/apache-airflow/production-deployment.rst
index 4fb693d..ecc6077 100644
--- a/docs/apache-airflow/production-deployment.rst
+++ b/docs/apache-airflow/production-deployment.rst
@@ -118,852 +118,7 @@ To mitigate these issues, make sure you have a :doc:`health check </logging-moni
 Production Container Images
 ===========================
 
-Production-ready reference Image
---------------------------------
-
-For the ease of deployment in production, the community releases a production-ready reference container
-image.
-
-The docker image provided (as convenience binary package) in the
-`Apache Airflow DockerHub <https://hub.docker.com/r/apache/airflow>`_ is a bare image
-that has a few external dependencies and extras installed..
-
-The Apache Airflow image provided as convenience package is optimized for size, so
-it provides just a bare minimal set of the extras and dependencies installed and in most cases
-you want to either extend or customize the image. You can see all possible extras in
-:doc:`extra-packages-ref`. The set of extras used in Airflow Production image are available in the
-`Dockerfile <https://github.com/apache/airflow/blob/2c6c7fdb2308de98e142618836bdf414df9768c8/Dockerfile#L39>`_.
-
-The production images are build in DockerHub from released version and release candidates. There
-are also images published from branches but they are used mainly for development and testing purpose.
-See `Airflow Git Branching <https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#airflow-git-branches>`_
-for details.
-
-
-Customizing or extending the Production Image
----------------------------------------------
-
-Before you dive-deeply in the way how the Airflow Image is build, named and why we are doing it the
-way we do, you might want to know very quickly how you can extend or customize the existing image
-for Apache Airflow. This chapter gives you a short answer to those questions.
-
-Airflow Summit 2020's `Production Docker Image <https://youtu.be/wDr3Y7q2XoI>`_ talk provides more
-details about the context, architecture and customization/extension methods for the Production Image.
-
-Extending the image
-...................
-
-Extending the image is easiest if you just need to add some dependencies that do not require
-compiling. The compilation framework of Linux (so called ``build-essential``) is pretty big, and
-for the production images, size is really important factor to optimize for, so our Production Image
-does not contain ``build-essential``. If you need compiler like gcc or g++ or make/cmake etc. - those
-are not found in the image and it is recommended that you follow the "customize" route instead.
-
-How to extend the image - it is something you are most likely familiar with - simply
-build a new image using Dockerfile's ``FROM`` directive and add whatever you need. Then you can add your
-Debian dependencies with ``apt`` or PyPI dependencies with ``pip install`` or any other stuff you need.
-
-You should be aware, about a few things:
-
-* The production image of airflow uses "airflow" user, so if you want to add some of the tools
-  as ``root`` user, you need to switch to it with ``USER`` directive of the Dockerfile. Also you
-  should remember about following the
-  `best practises of Dockerfiles <https://docs.docker.com/develop/develop-images/dockerfile_best-practices/>`_
-  to make sure your image is lean and small.
-
-.. code-block:: dockerfile
-
-  FROM apache/airflow:2.0.0
-  USER root
-  RUN apt-get update \
-    && apt-get install -y --no-install-recommends \
-           my-awesome-apt-dependency-to-add \
-    && apt-get autoremove -yqq --purge \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-  USER airflow
-
-
-* PyPI dependencies in Apache Airflow are installed in the user library, of the "airflow" user, so
-  you need to install them with the ``--user`` flag and WITHOUT switching to airflow user. Note also
-  that using --no-cache-dir is a good idea that can help to make your image smaller.
-
-.. code-block:: dockerfile
-
-  FROM apache/airflow:2.0.0
-  RUN pip install --no-cache-dir --user my-awesome-pip-dependency-to-add
-
-* As of 2.0.1 image the ``--user`` flag is turned on by default by setting ``PIP_USER`` environment variable
-  to ``true``. This can be disabled by un-setting the variable or by setting it to ``false``.
-
-
-* If your apt, or PyPI dependencies require some of the build-essentials, then your best choice is
-  to follow the "Customize the image" route. However it requires to checkout sources of Apache Airflow,
-  so you might still want to choose to add build essentials to your image, even if your image will
-  be significantly bigger.
-
-.. code-block:: dockerfile
-
-  FROM apache/airflow:2.0.0
-  USER root
-  RUN apt-get update \
-    && apt-get install -y --no-install-recommends \
-           build-essential my-awesome-apt-dependency-to-add \
-    && apt-get autoremove -yqq --purge \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-  USER airflow
-  RUN pip install --no-cache-dir --user my-awesome-pip-dependency-to-add
-
-
-* You can also embed your dags in the image by simply adding them with COPY directive of Airflow.
-  The DAGs in production image are in /opt/airflow/dags folder.
-
-Customizing the image
-.....................
-
-Customizing the image is an alternative way of adding your own dependencies to the image - better
-suited to prepare optimized production images.
-
-The advantage of this method is that it produces optimized image even if you need some compile-time
-dependencies that are not needed in the final image. You need to use Airflow Sources to build such images
-from the `official distribution folder of Apache Airflow <https://downloads.apache.org/airflow/>`_ for the
-released versions, or checked out from the GitHub project if you happen to do it from git sources.
-
-The easiest way to build the image is to use ``breeze`` script, but you can also build such customized
-image by running appropriately crafted docker build in which you specify all the ``build-args``
-that you need to add to customize it. You can read about all the args and ways you can build the image
-in the `<#production-image-build-arguments>`_ chapter below.
-
-Here just a few examples are presented which should give you general understanding of what you can customize.
-
-This builds the production image in version 3.7 with additional airflow extras from 2.0.0 PyPI package and
-additional apt dev and runtime dependencies.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty" \
-    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc" \
-    --build-arg ADDITIONAL_PYTHON_DEPS="pandas" \
-    --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++" \
-    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="default-jre-headless" \
-    --tag my-image
-
-
-the same image can be built using ``breeze`` (it supports auto-completion of the options):
-
-.. code-block:: bash
-
-  ./breeze build-image \
-      --production-image  --python 3.7 --install-airflow-version=2.0.0 \
-      --additional-extras=jdbc --additional-python-deps="pandas" \
-      --additional-dev-apt-deps="gcc g++" --additional-runtime-apt-deps="default-jre-headless"
-
-
-You can customize more aspects of the image - such as additional commands executed before apt dependencies
-are installed, or adding extra sources to install your dependencies from. You can see all the arguments
-described below but here is an example of rather complex command to customize the image
-based on example in `this comment <https://github.com/apache/airflow/issues/8605#issuecomment-690065621>`_:
-
-.. code-block:: bash
-
-  docker build . -f Dockerfile \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty" \
-    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="slack" \
-    --build-arg ADDITIONAL_PYTHON_DEPS=" \
-        apache-airflow-providers-odbc \
-        azure-storage-blob \
-        sshtunnel \
-        google-api-python-client \
-        oauth2client \
-        beautifulsoup4 \
-        dateparser \
-        rocketchat_API \
-        typeform" \
-    --build-arg ADDITIONAL_DEV_APT_DEPS="msodbcsql17 unixodbc-dev g++" \
-    --build-arg ADDITIONAL_DEV_APT_COMMAND="curl https://packages.microsoft.com/keys/microsoft.asc | \
-    apt-key add --no-tty - && \
-    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
-    --build-arg ADDITIONAL_DEV_ENV_VARS="ACCEPT_EULA=Y" \
-    --build-arg ADDITIONAL_RUNTIME_APT_COMMAND="curl https://packages.microsoft.com/keys/microsoft.asc | \
-    apt-key add --no-tty - && \
-    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
-    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="msodbcsql17 unixodbc git procps vim" \
-    --build-arg ADDITIONAL_RUNTIME_ENV_VARS="ACCEPT_EULA=Y" \
-    --tag my-image
-
-Customizing images in high security restricted environments
-...........................................................
-
-You can also make sure your image is only build using local constraint file and locally downloaded
-wheel files. This is often useful in Enterprise environments where the binary files are verified and
-vetted by the security teams.
-
-This builds below builds the production image in version 3.7 with packages and constraints used from the local
-``docker-context-files`` rather than installed from PyPI or GitHub. It also disables MySQL client
-installation as it is using external installation method.
-
-Note that as a prerequisite - you need to have downloaded wheel files. In the example below we
-first download such constraint file locally and then use ``pip download`` to get the .whl files needed
-but in most likely scenario, those wheel files should be copied from an internal repository of such .whl
-files. Note that ``AIRFLOW_VERSION_SPECIFICATION`` is only there for reference, the apache airflow .whl file
-in the right version is part of the .whl files downloaded.
-
-Note that 'pip download' will only works on Linux host as some of the packages need to be compiled from
-sources and you cannot install them providing ``--platform`` switch. They also need to be downloaded using
-the same python version as the target image.
-
-The ``pip download`` might happen in a separate environment. The files can be committed to a separate
-binary repository and vetted/verified by the security team and used subsequently to build images
-of Airflow when needed on an air-gaped system.
-
-Preparing the constraint files and wheel files:
-
-.. code-block:: bash
-
-  rm docker-context-files/*.whl docker-context-files/*.txt
-
-  curl -Lo "docker-context-files/constraints-2-0.txt" \
-    https://raw.githubusercontent.com/apache/airflow/constraints-2-0/constraints-3.7.txt
-
-  pip download --dest docker-context-files \
-    --constraint docker-context-files/constraints-2-0.txt  \
-    apache-airflow[async,aws,azure,celery,dask,elasticsearch,gcp,kubernetes,mysql,postgres,redis,slack,ssh,statsd,virtualenv]==2.0.0
-
-Since apache-airflow .whl packages are treated differently by the docker image, you need to rename the
-downloaded apache-airflow* files, for example:
-
-.. code-block:: bash
-
-   pushd docker-context-files
-   for file in apache?airflow*
-   do
-     mv ${file} _${file}
-   done
-   popd
-
-Building the image:
-
-.. code-block:: bash
-
-  ./breeze build-image \
-      --production-image --python 3.7 --install-airflow-version=2.0.0 \
-      --disable-mysql-client-installation --disable-pip-cache --install-from-local-files-when-building \
-      --constraints-location="/docker-context-files/constraints-2-0.txt"
-
-or
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty" \
-    --build-arg INSTALL_MYSQL_CLIENT="false" \
-    --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="false" \
-    --build-arg INSTALL_FROM_DOCKER_CONTEXT_FILES="true" \
-    --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/constraints-2-0.txt"
-
-
-Customizing & extending the image together
-..........................................
-
-You can combine both - customizing & extending the image. You can build the image first using
-``customize`` method (either with docker command or with ``breeze`` and then you can ``extend``
-the resulting image using ``FROM`` any dependencies you want.
-
-Customizing PYPI installation
-.............................
-
-You can customize PYPI sources used during image build by adding a docker-context-files/.pypirc file
-This .pypirc will never be committed to the repository and will not be present in the final production image.
-It is added and used only in the build segment of the image so it is never copied to the final image.
-
-External sources for dependencies
----------------------------------
-
-In corporate environments, there is often the need to build your Container images using
-other than default sources of dependencies. The docker file uses standard sources (such as
-Debian apt repositories or PyPI repository. However, in corporate environments, the dependencies
-are often only possible to be installed from internal, vetted repositories that are reviewed and
-approved by the internal security teams. In those cases, you might need to use those different
-sources.
-
-This is rather easy if you extend the image - you simply write your extension commands
-using the right sources - either by adding/replacing the sources in apt configuration or
-specifying the source repository in pip install command.
-
-It's a bit more involved in the case of customizing the image. We do not have yet (but we are working
-on it) a capability of changing the sources via build args. However, since the builds use
-Dockerfile that is a source file, you can rather easily simply modify the file manually and
-specify different sources to be used by either of the commands.
-
-
-Comparing extending and customizing the image
----------------------------------------------
-
-Here is the comparison of the two types of building images.
-
-+----------------------------------------------------+---------------------+-----------------------+
-|                                                    | Extending the image | Customizing the image |
-+====================================================+=====================+=======================+
-| Produces optimized image                           | No                  | Yes                   |
-+----------------------------------------------------+---------------------+-----------------------+
-| Use Airflow Dockerfile sources to build the image  | No                  | Yes                   |
-+----------------------------------------------------+---------------------+-----------------------+
-| Requires Airflow sources                           | No                  | Yes                   |
-+----------------------------------------------------+---------------------+-----------------------+
-| You can build it with Breeze                       | No                  | Yes                   |
-+----------------------------------------------------+---------------------+-----------------------+
-| Allows to use non-default sources for dependencies | Yes                 | No [1]                |
-+----------------------------------------------------+---------------------+-----------------------+
-
-[1] When you combine customizing and extending the image, you can use external sources
-in the "extend" part. There are plans to add functionality to add external sources
-option to image customization. You can also modify Dockerfile manually if you want to
-use non-default sources for dependencies.
-
-Using the production image
---------------------------
-
-The PROD image entrypoint works as follows:
-
-* In case the user is not "airflow" (with undefined user id) and the group id of the user is set to 0 (root),
-  then the user is dynamically added to /etc/passwd at entry using USER_NAME variable to define the user name.
-  This is in order to accommodate the
-  `OpenShift Guidelines <https://docs.openshift.com/enterprise/3.0/creating_images/guidelines.html>`_
-
-* The ``AIRFLOW_HOME`` is set by default to ``/opt/airflow/`` - this means that DAGs
-  are in default in the ``/opt/airflow/dags`` folder and logs are in the ``/opt/airflow/logs``
-
-* The working directory is ``/opt/airflow`` by default.
-
-* If ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable is passed to the container and it is either mysql or postgres
-  SQL alchemy connection, then the connection is checked and the script waits until the database is reachable.
-  If ``AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD`` variable is passed to the container, it is evaluated as a
-  command to execute and result of this evaluation is used as ``AIRFLOW__CORE__SQL_ALCHEMY_CONN``. The
-  ``_CMD`` variable takes precedence over the ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable.
-
-* If no ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable is set then SQLite database is created in
-  ${AIRFLOW_HOME}/airflow.db and db reset is executed.
-
-* If first argument equals to "bash" - you are dropped to a bash shell or you can executes bash command
-  if you specify extra arguments. For example:
-
-.. code-block:: bash
-
-  docker run -it apache/airflow:master-python3.6 bash -c "ls -la"
-  total 16
-  drwxr-xr-x 4 airflow root 4096 Jun  5 18:12 .
-  drwxr-xr-x 1 root    root 4096 Jun  5 18:12 ..
-  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 dags
-  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 logs
-
-* If first argument is equal to "python" - you are dropped in python shell or python commands are executed if
-  you pass extra parameters. For example:
-
-.. code-block:: bash
-
-  > docker run -it apache/airflow:master-python3.6 python -c "print('test')"
-  test
-
-* If first argument equals to "airflow" - the rest of the arguments is treated as an airflow command
-  to execute. Example:
-
-.. code-block:: bash
-
-   docker run -it apache/airflow:master-python3.6 airflow webserver
-
-* If there are any other arguments - they are simply passed to the "airflow" command
-
-.. code-block:: bash
-
-  > docker run -it apache/airflow:master-python3.6 version
-  2.0.0.dev0
-
-* If ``AIRFLOW__CELERY__BROKER_URL`` variable is passed and airflow command with
-  scheduler, worker of flower command is used, then the script checks the broker connection
-  and waits until the Celery broker database is reachable.
-  If ``AIRFLOW__CELERY__BROKER_URL_CMD`` variable is passed to the container, it is evaluated as a
-  command to execute and result of this evaluation is used as ``AIRFLOW__CELERY__BROKER_URL``. The
-  ``_CMD`` variable takes precedence over the ``AIRFLOW__CELERY__BROKER_URL`` variable.
-
-Production image build arguments
---------------------------------
-
-The following build arguments (``--build-arg`` in docker build command) can be used for production images:
-
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| Build argument                           | Default value                            | Description                              |
-+==========================================+==========================================+==========================================+
-| ``PYTHON_BASE_IMAGE``                    | ``python:3.6-slim-buster``               | Base python image.                       |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``PYTHON_MAJOR_MINOR_VERSION``           | ``3.6``                                  | major/minor version of Python (should    |
-|                                          |                                          | match base image).                       |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_VERSION``                      | ``2.0.0.dev0``                           | version of Airflow.                      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_REPO``                         | ``apache/airflow``                       | the repository from which PIP            |
-|                                          |                                          | dependencies are pre-installed.          |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_BRANCH``                       | ``master``                               | the branch from which PIP dependencies   |
-|                                          |                                          | are pre-installed initially.             |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_LOCATION``         |                                          | If not empty, it will override the       |
-|                                          |                                          | source of the constraints with the       |
-|                                          |                                          | specified URL or file. Note that the     |
-|                                          |                                          | file has to be in docker context so      |
-|                                          |                                          | it's best to place such file in          |
-|                                          |                                          | one of the folders included in           |
-|                                          |                                          | .dockerignore.                           |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_REFERENCE``        | ``constraints-master``                   | Reference (branch or tag) from GitHub    |
-|                                          |                                          | where constraints file is taken from     |
-|                                          |                                          | It can be ``constraints-master`` but     |
-|                                          |                                          | also can be ``constraints-1-10`` for     |
-|                                          |                                          | 1.10.* installation. In case of building |
-|                                          |                                          | specific version you want to point it    |
-|                                          |                                          | to specific tag, for example             |
-|                                          |                                          | ``constraints-1.10.15``.                 |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_PROVIDERS_FROM_SOURCES``       | ``false``                                | If set to ``true`` and image is built    |
-|                                          |                                          | from sources, all provider packages are  |
-|                                          |                                          | installed from sources rather than from  |
-|                                          |                                          | packages. It has no effect when          |
-|                                          |                                          | installing from PyPI or GitHub repo.     |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_EXTRAS``                       | (see Dockerfile)                         | Default extras with which airflow is     |
-|                                          |                                          | installed.                               |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_FROM_PYPI``                    | ``true``                                 | If set to true, Airflow is installed     |
-|                                          |                                          | from PyPI. if you want to install        |
-|                                          |                                          | Airflow from self-build package          |
-|                                          |                                          | you can set it to false, put package in  |
-|                                          |                                          | ``docker-context-files`` and set         |
-|                                          |                                          | ``INSTALL_FROM_DOCKER_CONTEXT_FILES`` to |
-|                                          |                                          | ``true``. For this you have to also keep |
-|                                          |                                          | ``AIRFLOW_PRE_CACHED_PIP_PACKAGES`` flag |
-|                                          |                                          | set to ``false``.                        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_PRE_CACHED_PIP_PACKAGES``      | ``false``                                | Allows to pre-cache airflow PIP packages |
-|                                          |                                          | from the GitHub of Apache Airflow        |
-|                                          |                                          | This allows to optimize iterations for   |
-|                                          |                                          | Image builds and speeds up CI builds.    |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_FROM_DOCKER_CONTEXT_FILES``    | ``false``                                | If set to true, Airflow, providers and   |
-|                                          |                                          | all dependencies are installed from      |
-|                                          |                                          | from locally built/downloaded            |
-|                                          |                                          | .whl and .tar.gz files placed in the     |
-|                                          |                                          | ``docker-context-files``. In certain     |
-|                                          |                                          | corporate environments, this is required |
-|                                          |                                          | to install airflow from such pre-vetted  |
-|                                          |                                          | packages rather than from PyPI. For this |
-|                                          |                                          | to work, also set ``INSTALL_FROM_PYPI``. |
-|                                          |                                          | Note that packages starting with         |
-|                                          |                                          | ``apache?airflow`` glob are treated      |
-|                                          |                                          | differently than other packages. All     |
-|                                          |                                          | ``apache?airflow`` packages are          |
-|                                          |                                          | installed with dependencies limited by   |
-|                                          |                                          | airflow constraints. All other packages  |
-|                                          |                                          | are installed without dependencies       |
-|                                          |                                          | 'as-is'. If you wish to install airflow  |
-|                                          |                                          | via 'pip download' with all dependencies |
-|                                          |                                          | downloaded, you have to rename the       |
-|                                          |                                          | apache airflow and provider packages to  |
-|                                          |                                          | not start with ``apache?airflow`` glob.  |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``UPGRADE_TO_NEWER_DEPENDENCIES``        | ``false``                                | If set to true, the dependencies are     |
-|                                          |                                          | upgraded to newer versions matching      |
-|                                          |                                          | setup.py before installation.            |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``CONTINUE_ON_PIP_CHECK_FAILURE``        | ``false``                                | By default the image build fails if pip  |
-|                                          |                                          | check fails for it. This is good for     |
-|                                          |                                          | interactive building but on CI the       |
-|                                          |                                          | image should be built regardless - we    |
-|                                          |                                          | have a separate step to verify image.    |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_AIRFLOW_EXTRAS``            |                                          | Optional additional extras with which    |
-|                                          |                                          | airflow is installed.                    |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_PYTHON_DEPS``               |                                          | Optional python packages to extend       |
-|                                          |                                          | the image with some extra dependencies.  |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``DEV_APT_COMMAND``                      | (see Dockerfile)                         | Dev apt command executed before dev deps |
-|                                          |                                          | are installed in the Build image.        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_DEV_APT_COMMAND``           |                                          | Additional Dev apt command executed      |
-|                                          |                                          | before dev dep are installed             |
-|                                          |                                          | in the Build image. Should start with    |
-|                                          |                                          | ``&&``.                                  |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``DEV_APT_DEPS``                         | (see Dockerfile)                         | Dev APT dependencies installed           |
-|                                          |                                          | in the Build image.                      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_DEV_APT_DEPS``              |                                          | Additional apt dev dependencies          |
-|                                          |                                          | installed in the Build image.            |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_DEV_APT_ENV``               |                                          | Additional env variables defined         |
-|                                          |                                          | when installing dev deps.                |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``RUNTIME_APT_COMMAND``                  | (see Dockerfile)                         | Runtime apt command executed before deps |
-|                                          |                                          | are installed in the Main image.         |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_RUNTIME_APT_COMMAND``       |                                          | Additional Runtime apt command executed  |
-|                                          |                                          | before runtime dep are installed         |
-|                                          |                                          | in the Main image. Should start with     |
-|                                          |                                          | ``&&``.                                  |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``RUNTIME_APT_DEPS``                     | (see Dockerfile)                         | Runtime APT dependencies installed       |
-|                                          |                                          | in the Main image.                       |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_RUNTIME_APT_DEPS``          |                                          | Additional apt runtime dependencies      |
-|                                          |                                          | installed in the Main image.             |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_RUNTIME_APT_ENV``           |                                          | Additional env variables defined         |
-|                                          |                                          | when installing runtime deps.            |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_HOME``                         | ``/opt/airflow``                         | Airflow’s HOME (that’s where logs and    |
-|                                          |                                          | SQLite databases are stored).            |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_UID``                          | ``50000``                                | Airflow user UID.                        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_GID``                          | ``50000``                                | Airflow group GID. Note that most files  |
-|                                          |                                          | created on behalf of airflow user belong |
-|                                          |                                          | to the ``root`` group (0) to keep        |
-|                                          |                                          | OpenShift Guidelines compatibility.      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_USER_HOME_DIR``                | ``/home/airflow``                        | Home directory of the Airflow user.      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``CASS_DRIVER_BUILD_CONCURRENCY``        | ``8``                                    | Number of processors to use for          |
-|                                          |                                          | cassandra PIP install (speeds up         |
-|                                          |                                          | installing in case cassandra extra is    |
-|                                          |                                          | used).                                   |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_MYSQL_CLIENT``                 | ``true``                                 | Whether MySQL client should be installed |
-|                                          |                                          | The mysql extra is removed from extras   |
-|                                          |                                          | if the client is not installed.          |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_PIP_VERSION``                  | ``20.2.4``                               | PIP version used.                        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``PIP_PROGRESS_BAR``                     | ``on``                                   | Progress bar for PIP installation        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-
-There are build arguments that determine the installation mechanism of Apache Airflow for the
-production image. There are three types of build:
-
-* From local sources (by default for example when you use ``docker build .``)
-* You can build the image from released PyPI airflow package (used to build the official Docker image)
-* You can build the image from any version in GitHub repository(this is used mostly for system testing).
-
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| Build argument                    | Default                | What to specify                                                                   |
-+===================================+========================+===================================================================================+
-| ``AIRFLOW_INSTALLATION_METHOD``   | ``apache-airflow``     | Should point to the installation method of Apache Airflow. It can be              |
-|                                   |                        | ``apache-airflow`` for installation from packages and URL to installation from    |
-|                                   |                        | GitHub repository tag or branch or "." to install from sources.                   |
-|                                   |                        | Note that installing from local sources requires appropriate values of the        |
-|                                   |                        | ``AIRFLOW_SOURCES_FROM`` and ``AIRFLOW_SOURCES_TO`` variables as described below. |
-|                                   |                        | Only used when ``INSTALL_FROM_PYPI`` is set to ``true``.                          |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_VERSION_SPECIFICATION`` |                        | Optional - might be used for package installation of different Airflow version    |
-|                                   |                        | for example"==2.0.0". For consistency, you should also set``AIRFLOW_VERSION``     |
-|                                   |                        | to the same value AIRFLOW_VERSION is resolved as label in the image created.      |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_REFERENCE`` | ``constraints-master`` | Reference (branch or tag) from GitHub where constraints file is taken from.       |
-|                                   |                        | It can be ``constraints-master`` but also can be``constraints-1-10`` for          |
-|                                   |                        | 1.10.*  installations. In case of building specific version                       |
-|                                   |                        | you want to point it to specific tag, for example ``constraints-2.0.0``           |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_WWW``                   | ``www``                | In case of Airflow 2.0 it should be "www", in case of Airflow 1.10                |
-|                                   |                        | series it should be "www_rbac".                                                   |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_SOURCES_FROM``          | ``empty``              | Sources of Airflow. Set it to "." when you install airflow from                   |
-|                                   |                        | local sources.                                                                    |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_SOURCES_TO``            | ``/empty``             | Target for Airflow sources. Set to "/opt/airflow" when                            |
-|                                   |                        | you want to install airflow from local sources.                                   |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-
-This builds production image in version 3.6 with default extras from the local sources (master version
-of 2.0 currently):
-
-.. code-block:: bash
-
-  docker build .
-
-This builds the production image in version 3.7 with default extras from 2.0.0 tag and
-constraints taken from constraints-2-0 branch in GitHub.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/2.0.0.tar.gz#egg=apache-airflow" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --build-arg AIRFLOW_BRANCH="v1-10-test" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty"
-
-This builds the production image in version 3.7 with default extras from 2.0.0 PyPI package and
-constraints taken from 2.0.0 tag in GitHub and pre-installed pip dependencies from the top
-of v1-10-test branch.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_BRANCH="v1-10-test" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2.0.0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty"
-
-This builds the production image in version 3.7 with additional airflow extras from 2.0.0 PyPI package and
-additional python dependencies and pre-installed pip dependencies from 2.0.0 tagged constraints.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_BRANCH="v1-10-test" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2.0.0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty" \
-    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="mssql,hdfs" \
-    --build-arg ADDITIONAL_PYTHON_DEPS="sshtunnel oauth2client"
-
-This builds the production image in version 3.7 with additional airflow extras from 2.0.0 PyPI package and
-additional apt dev and runtime dependencies.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg PYTHON_MAJOR_MINOR_VERSION=3.7 \
-    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
-    --build-arg AIRFLOW_VERSION="2.0.0" \
-    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --build-arg AIRFLOW_SOURCES_FROM="empty" \
-    --build-arg AIRFLOW_SOURCES_TO="/empty" \
-    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc" \
-    --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++" \
-    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="default-jre-headless"
-
-
-Actions executed at image start
--------------------------------
-
-If you are using the default entrypoint of the production image,
-there are a few actions that are automatically performed when the container starts.
-In some cases, you can pass environment variables to the image to trigger some of that behaviour.
-
-The variables that control the "execution" behaviour start with ``_AIRFLOW`` to distinguish them
-from the variables used to build the image starting with ``AIRFLOW``.
-
-Creating system user
-....................
-
-Airflow image is Open-Shift compatible, which means that you can start it with random user ID and group id 0.
-Airflow will automatically create such a user and make its home directory point to ``/home/airflow``.
-You can read more about it in the "Support arbitrary user ids" chapter in the
-`Openshift best practices <https://docs.openshift.com/container-platform/4.1/openshift_images/create-images.html#images-create-guide-openshift_create-images>`_.
-
-Waits for Airflow DB connection
-...............................
-
-In case Postgres or MySQL DB is used, the entrypoint will wait until the airflow DB connection becomes
-available. This happens always when you use the default entrypoint.
-
-The script detects backend type depending on the URL schema and assigns default port numbers if not specified
-in the URL. Then it loops until the connection to the host/port specified can be established
-It tries ``CONNECTION_CHECK_MAX_COUNT`` times and sleeps ``CONNECTION_CHECK_SLEEP_TIME`` between checks
-To disable check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
-
-Supported schemes:
-
-* ``postgres://`` - default port 5432
-* ``mysql://``    - default port 3306
-* ``sqlite://``
-
-In case of SQLite backend, there is no connection to establish and waiting is skipped.
-
-Upgrading Airflow DB
-....................
-
-If you set ``_AIRFLOW_DB_UPGRADE`` variable to a non-empty value, the entrypoint will run
-the ``airflow db upgrade`` command right after verifying the connection. You can also use this
-when you are running airflow with internal SQLite database (default) to upgrade the db and create
-admin users at entrypoint, so that you can start the webserver immediately. Note - using SQLite is
-intended only for testing purpose, never use SQLite in production as it has severe limitations when it
-comes to concurrency.
-
-
-Creating admin user
-...................
-
-The entrypoint can also create the webserver user automatically when you enter it. You need to set
-``_AIRFLOW_WWW_USER_CREATE`` to a non-empty value in order to do that. This is not intended for
-production; it is only useful if you would like to run a quick test with the production image.
-You need to pass at least a password to create such a user, via ``_AIRFLOW_WWW_USER_PASSWORD`` or
-``_AIRFLOW_WWW_USER_PASSWORD_CMD``. Similarly to the other ``*_CMD`` variables, the content of
-the ``*_CMD`` variable will be evaluated as a shell command and its output will be set as the password.
-
-User creation will fail if none of the ``PASSWORD`` variables are set - there is no default for
-password for security reasons.
-
-+-----------+--------------------------+----------------------------------------------------------------------+
-| Parameter | Default                  | Environment variable                                                 |
-+===========+==========================+======================================================================+
-| username  | admin                    | ``_AIRFLOW_WWW_USER_USERNAME``                                       |
-+-----------+--------------------------+----------------------------------------------------------------------+
-| password  |                          | ``_AIRFLOW_WWW_USER_PASSWORD_CMD`` or ``_AIRFLOW_WWW_USER_PASSWORD`` |
-+-----------+--------------------------+----------------------------------------------------------------------+
-| firstname | Airflow                  | ``_AIRFLOW_WWW_USER_FIRSTNAME``                                      |
-+-----------+--------------------------+----------------------------------------------------------------------+
-| lastname  | Admin                    | ``_AIRFLOW_WWW_USER_LASTNAME``                                       |
-+-----------+--------------------------+----------------------------------------------------------------------+
-| email     | airflowadmin@example.com | ``_AIRFLOW_WWW_USER_EMAIL``                                          |
-+-----------+--------------------------+----------------------------------------------------------------------+
-| role      | Admin                    | ``_AIRFLOW_WWW_USER_ROLE``                                           |
-+-----------+--------------------------+----------------------------------------------------------------------+
-
-In case the password is specified, the user will be attempted to be created, but the entrypoint will
-not fail if the attempt fails (this accounts for the case that the user is already created).
-
-You can, for example start the webserver in the production image with initializing the internal SQLite
-database and creating an ``admin/admin`` Admin user with the following command:
-
-.. code-block:: bash
-
-  docker run -it -p 8080:8080 \
-    --env "_AIRFLOW_DB_UPGRADE=true" \
-    --env "_AIRFLOW_WWW_USER_CREATE=true" \
-    --env "_AIRFLOW_WWW_USER_PASSWORD=admin" \
-      apache/airflow:master-python3.8 webserver
-
-
-.. code-block:: bash
-
-  docker run -it -p 8080:8080 \
-    --env "_AIRFLOW_DB_UPGRADE=true" \
-    --env "_AIRFLOW_WWW_USER_CREATE=true" \
-    --env "_AIRFLOW_WWW_USER_PASSWORD_CMD=echo admin" \
-      apache/airflow:master-python3.8 webserver
-
-The commands above perform initialization of the SQLite database, create admin user with admin password
-and Admin role. They also forward local port ``8080`` to the webserver port and finally start the webserver.
-
-
-Waits for celery broker connection
-..................................
-
-In case Postgres or MySQL DB is used, and one of the ``scheduler``, ``celery``, ``worker``, or ``flower``
-commands are used the entrypoint will wait until the celery broker DB connection is available.
-
-The script detects backend type depending on the URL schema and assigns default port numbers if not specified
-in the URL. Then it loops until connection to the host/port specified can be established
-It tries ``CONNECTION_CHECK_MAX_COUNT`` times and sleeps ``CONNECTION_CHECK_SLEEP_TIME`` between checks.
-To disable check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
-
-Supported schemes:
-
-* ``amqp(s)://``  (rabbitmq) - default port 5672
-* ``redis://``               - default port 6379
-* ``postgres://``            - default port 5432
-* ``mysql://``               - default port 3306
-* ``sqlite://``
-
-In case of SQLite backend, there is no connection to establish and waiting is skipped.
-
-
-Recipes
--------
-
-Users sometimes share interesting ways of using the Docker images. We encourage users to contribute these
-recipes to the documentation in case they prove useful to other members of the community by
-submitting a pull request. The sections below capture this knowledge.
-
-Google Cloud SDK installation
-.............................
-
-Some operators, such as :class:`airflow.providers.google.cloud.operators.kubernetes_engine.GKEStartPodOperator`,
-:class:`airflow.providers.google.cloud.operators.dataflow.DataflowStartSqlJobOperator`, require
-the installation of `Google Cloud SDK <https://cloud.google.com/sdk>`__ (includes ``gcloud``).
-You can also run these commands with BashOperator.
-
-Create a new Dockerfile like the one shown below.
-
-.. exampleinclude:: /docker-images-recipes/gcloud.Dockerfile
-    :language: dockerfile
-
-Then build a new image.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.0" \
-    -t my-airflow-image
-
-
-Apache Hadoop Stack installation
-................................
-
-Airflow is often used to run tasks on a Hadoop cluster. It requires a Java Runtime Environment (JRE) to run.
-Below are the steps to install tools that are frequently used in the Hadoop world:
-
-- Java Runtime Environment (JRE)
-- Apache Hadoop
-- Apache Hive
-- `Cloud Storage connector for Apache Hadoop <https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage>`__
-
-
-Create a new Dockerfile like the one shown below.
-
-.. exampleinclude:: /docker-images-recipes/hadoop.Dockerfile
-    :language: dockerfile
-
-Then build a new image.
-
-.. code-block:: bash
-
-  docker build . \
-    --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.0" \
-    -t my-airflow-image
-
-More details about the images
------------------------------
-
-You can read more details about the images - the context, their parameters and internal structure in the
-`IMAGES.rst <https://github.com/apache/airflow/blob/master/IMAGES.rst>`_ document.
+We provide :doc:`a Docker Image (OCI) for Apache Airflow <docker-stack:index>` for use in a containerized environment. Consider using it to guarantee that software will always run the same no matter where it’s deployed.
 
 .. _production-deployment:kerberos:
 
diff --git a/docs/apache-airflow/start/docker.rst b/docs/apache-airflow/start/docker.rst
index e79cae5..0e2becf 100644
--- a/docs/apache-airflow/start/docker.rst
+++ b/docs/apache-airflow/start/docker.rst
@@ -195,7 +195,7 @@ To stop and delete containers, delete volumes with database data and download im
 Notes
 =====
 
-By default, the Docker Compose file uses the latest Airflow image (`apache/airflow <https://hub.docker.com/r/apache/airflow>`__). If you need, you can :ref:`customize and extend it <docker_image>`.
+By default, the Docker Compose file uses the latest Airflow image (`apache/airflow <https://hub.docker.com/r/apache/airflow>`__). If you need, you can :doc:`customize and extend it <docker-stack:index>`.
 
 What's Next?
 ============
diff --git a/docs/build_docs.py b/docs/build_docs.py
index 1080533..4e4786f 100755
--- a/docs/build_docs.py
+++ b/docs/build_docs.py
@@ -205,19 +205,23 @@ def main():
         _promote_new_flags()
 
     with with_group("Available packages"):
-        for pkg in available_packages:
+        for pkg in sorted(available_packages):
             print(f" - {pkg}")
 
     if package_filters:
         print("Current package filters: ", package_filters)
     current_packages = process_package_filters(available_packages, package_filters)
+
+    with with_group("Fetching inventories"):
+        # Inventories that could not be retrieved should be retrieved first. This may mean this is a
+        # new package.
+        priority_packages = fetch_inventories()
+    current_packages = sorted(current_packages, key=lambda d: -1 if d in priority_packages else 1)
+
     with with_group(f"Documentation will be built for {len(current_packages)} package(s)"):
         for pkg_no, pkg in enumerate(current_packages, start=1):
             print(f"{pkg_no}. {pkg}")
 
-    with with_group("Fetching inventories"):
-        fetch_inventories()
-
     all_build_errors: Dict[Optional[str], List[DocBuildError]] = {}
     all_spelling_errors: Dict[Optional[str], List[SpellingError]] = {}
     package_build_errors, package_spelling_errors = build_docs_for_packages(
diff --git a/docs/conf.py b/docs/conf.py
index 2a4ca2b..678f053 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -145,6 +145,9 @@ if PACKAGE_NAME == "apache-airflow-providers":
             'providers_packages_ref',
         ]
     )
+elif PACKAGE_NAME in ("helm-chart", "docker-stack"):
+    # No extra extensions
+    pass
 else:
     extensions.append('autoapi.extension')
 # List of patterns, relative to source directory, that match files and
diff --git a/docs/docker-stack/build.rst b/docs/docker-stack/build.rst
new file mode 100644
index 0000000..a07a837
--- /dev/null
+++ b/docs/docker-stack/build.rst
@@ -0,0 +1,511 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Building the image
+==================
+
+Before you dive deeply into how the Airflow image is built and named, and why we do it the way we do,
+you might want to know how you can quickly extend or customize the existing image
+for Apache Airflow. This chapter gives you a short answer to those questions.
+
+Extending vs. customizing the image
+-----------------------------------
+
+Here is a comparison of the two ways of building images, to guide you when choosing
+how you want to build your own image.
+
++----------------------------------------------------+-----------+-------------+
+|                                                    | Extending | Customizing |
++====================================================+===========+=============+
+| Can be built without airflow sources               | Yes       | No          |
++----------------------------------------------------+-----------+-------------+
+| Uses familiar ``FROM`` pattern of image building   | Yes       | No          |
++----------------------------------------------------+-----------+-------------+
+| Requires only basic knowledge about images         | Yes       | No          |
++----------------------------------------------------+-----------+-------------+
+| Builds quickly                                     | Yes       | No          |
++----------------------------------------------------+-----------+-------------+
+| Produces image heavily optimized for size          | No        | Yes         |
++----------------------------------------------------+-----------+-------------+
+| Can build from custom airflow sources (forks)      | No        | Yes         |
++----------------------------------------------------+-----------+-------------+
+| Can build on air-gapped system                     | No        | Yes         |
++----------------------------------------------------+-----------+-------------+
+
+TL;DR: If you need to build a custom image, it is easier to start with "Extending". However, if your
+dependencies require a compilation step or you need to build the image from security-vetted
+packages, switching to "Customizing" the image produces much more optimized images. In the example
+further below, where we compare equivalent "Extending" and "Customizing" builds, the image built by
+Extending was 1.1GB while the Customized one was 874MB - roughly a 20% reduction in
+size for the Customized image.
+
+.. note::
+
+  You can also combine both - customizing and extending the image in one. You can first build your
+  optimized base image using the ``customization`` method (for example by your admin team) with all
+  the dependencies that require heavy compilation, publish it in your registry, and let others
+  ``extend`` your image using ``FROM`` and add their own lightweight dependencies. This reflects well
+  the typical split where "casual" users will extend the image and "power users" will customize it.
+
+Airflow Summit 2020's `Production Docker Image <https://youtu.be/wDr3Y7q2XoI>`_ talk provides more
+details about the context, architecture and customization/extension methods for the Production Image.
+
+Extending the image
+-------------------
+
+Extending the image is easiest if you just need to add some dependencies that do not require
+compiling. The Linux compilation toolchain (the so-called ``build-essential`` package) is pretty big, and
+since size is a really important factor to optimize for in production images, our Production Image
+does not contain ``build-essential``. If you need a compiler like gcc, g++, make or cmake - those
+are not found in the image and it is recommended that you follow the "customize" route instead.
+
+Extending the image works the way you are most likely already familiar with - simply
+build a new image using the Dockerfile ``FROM`` directive and add whatever you need. You can then add your
+Debian dependencies with ``apt``, PyPI dependencies with ``pip install``, or anything else you need.
+
+You should be aware of a few things:
+
+* The production image of airflow uses the "airflow" user, so if you want to add some of the tools
+  as the ``root`` user, you need to switch to it with the ``USER`` directive of the Dockerfile and switch back to
+  the ``airflow`` user when you are done. You should also remember to follow the
+  `best practices of Dockerfiles <https://docs.docker.com/develop/develop-images/dockerfile_best-practices/>`_
+  to make sure your image is lean and small.
+
+* The PyPI dependencies in Apache Airflow are installed in the user library of the "airflow" user, so
+  PIP packages are installed to the ``~/.local`` folder as if the ``--user`` flag was specified when running PIP.
+  Note also that using ``--no-cache-dir`` is a good idea and can help to make your image smaller.
+
+* If your apt or PyPI dependencies require ``build-essential`` or other packages that are needed
+  to compile your Python dependencies, then your best choice is to follow the "Customizing the image" route,
+  because you can build a highly optimized (for size) image this way. However, it requires checking out the sources
+  of Apache Airflow, so you might still want to choose to add ``build-essential`` to your image,
+  even if your image will be significantly bigger.
+
+* You can also embed your DAGs in the image by simply adding them with the ``COPY`` directive of your Dockerfile.
+  The DAGs in the production image are in the ``/opt/airflow/dags`` folder.
+
+* You can build your image without any need for the Airflow sources. It is enough to place the
+  ``Dockerfile`` and any files that are referred to (such as DAG files) in a separate directory and run
+  the command ``docker build . --tag my-image:my-tag`` (where ``my-image`` is the name you want to give the image
+  and ``my-tag`` is the tag you want to tag the image with).
+
+.. note::
+  As of the 2.0.1 image, the ``--user`` flag is turned on by default by setting the ``PIP_USER`` environment variable
+  to ``true``. This can be disabled by unsetting the variable or by setting it to ``false``. In the
+  2.0.0 image you had to add the ``--user`` flag yourself, as in ``pip install --user``.
+
+Examples of image extending
+---------------------------
+
+An ``apt`` package example
+..........................
+
+The following example adds ``vim`` to the airflow image.
+
+.. exampleinclude:: docker-examples/extending/add-apt-packages/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
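+
+For reference, a minimal sketch of what such an extending ``Dockerfile`` might look like is shown below
+(assuming the ``apache/airflow:2.0.1`` base image; the maintained example lives in the file referenced above):
+
+.. code-block:: Dockerfile
+
+  # Sketch only - extend the official image and add an apt package
+  FROM apache/airflow:2.0.1
+  # Switch to root to install system packages, then switch back to the airflow user
+  USER root
+  RUN apt-get update \
+    && apt-get install -y --no-install-recommends vim \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
+  USER airflow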
+
+A ``PyPI`` package example
+..........................
+
+The following example adds the ``lxml`` Python package from PyPI to the image.
+
+.. exampleinclude:: docker-examples/extending/add-pypi-packages/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+A ``build-essential`` requiring package example
+...............................................
+
+The following example adds the ``mpi4py`` package, which requires both ``build-essential`` and an MPI compiler.
+
+.. exampleinclude:: docker-examples/extending/add-build-essential-extend/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+The size of this image is ~1.1 GB when built. As you will see further below, you can achieve a 20% reduction in
+the size of the image if you use "Customizing" rather than "Extending" the image.
+
+DAG embedding example
+.....................
+
+The following example adds ``test_dag.py`` to your image in the ``/opt/airflow/dags`` folder.
+
+.. exampleinclude:: docker-examples/extending/embedding-dags/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+
+.. exampleinclude:: docker-examples/extending/embedding-dags/test_dag.py
+    :language: Python
+    :start-after: [START dag]
+    :end-before: [END dag]
+
+Customizing the image
+---------------------
+
+Customizing the image is an optimized way of adding your own dependencies to the image - better
+suited to preparing highly optimized (for size) production images, especially when you have dependencies
+that need to be compiled before installing (such as ``mpi4py``).
+
+It also allows more sophisticated usages needed by "power users" - for example using a forked version
+of Airflow, or building the images from security-vetted sources.
+
+The big advantage of this method is that it produces an optimized image even if you need some compile-time
+dependencies that are not needed in the final image.
+
+The disadvantage is that you need to use the Airflow sources to build such images - from the
+`official distribution repository of Apache Airflow <https://downloads.apache.org/airflow/>`_ for the
+released versions, from the checked-out sources (using release tags or main branches) in the
+`Airflow GitHub Project <https://github.com/apache/airflow>`_, or from your own fork
+if you happen to maintain your own fork of Airflow.
+
+Another disadvantage is that the pattern of building Docker images with ``--build-arg`` is less familiar
+to developers of such images. However, it is quite well known to "power users". That's why the
+customizing flow is better suited for those users who have more familiarity and have more custom
+requirements.
+
+The image also usually takes much longer to build than the equivalent "Extended" image because, instead of
+extending the layers that already come from the base image, it rebuilds the layers needed
+to add the extra dependencies at the early stages of image building.
+
+When customizing the image you can choose a number of options for how you install Airflow:
+
+   * From the PyPI releases (default)
+   * From custom installation sources - adding to or replacing the original apt or PyPI repositories
+   * From local sources. This is used mostly during development.
+   * From a tag or branch, or a specific commit from a GitHub Airflow repository (or fork). This is particularly
+     useful when you build an image for a custom version of Airflow that you keep in your fork and you do not
+     want to release the custom Airflow version to PyPI.
+   * From locally stored binary packages for Airflow, Airflow Providers and other dependencies. This is
+     particularly useful if you want to build Airflow in a highly secure environment where all such packages
+     must be vetted by your security team and stored in your private artifact registry. This also
+     allows you to build the airflow image in an air-gapped environment.
+   * Side note. Building ``Airflow`` in an ``air-gapped`` environment sounds pretty funny, doesn't it?
+
+You can also add a range of customizations while building the image:
+
+   * base python image you use for Airflow
+   * version of Airflow to install
+   * extras to install for Airflow (or even removing some default extras)
+   * additional apt/python dependencies to use while building Airflow (DEV dependencies)
+   * additional apt/python dependencies to install for runtime version of Airflow (RUNTIME dependencies)
+   * additional commands and variables to set if needed during building or preparing Airflow runtime
+   * choosing constraint file to use when installing Airflow
+
+Additional explanation is needed for the last point. Airflow uses constraints to make sure
+that it can be predictably installed, even if some new versions of Airflow dependencies are
+released (or even dependencies of our dependencies!). The docker image and accompanying scripts
+usually determine automatically the right versions of constraints to be used, based on the Airflow
+version installed and the Python version. For example, the 2.0.1 version of Airflow installed from PyPI
+uses constraints from the ``constraints-2.0.1`` tag. However, in some cases - when installing airflow from
+GitHub for example - you have to manually specify the version of constraints used, otherwise
+it will default to the latest version of the constraints, which might not be compatible with the
+version of Airflow you use.
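+
+For example, a hedged sketch of such a manual constraints specification, using the
+``AIRFLOW_CONSTRAINTS_REFERENCE`` build arg described in :doc:`build-arg-ref`, might look like this:
+
+.. code-block:: bash
+
+  # Sketch only - pin the constraints reference to the tag matching the Airflow version you install
+  docker build . \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.1" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2.0.1"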
+
+You can also download any version of the Airflow constraints, adapt it by manually setting your own
+versions of dependencies, and use the constraints file that you prepared yourself.
+
+You can read more about constraints in the
+`Installation <http://airflow.apache.org/docs/apache-airflow/stable/installation.html#constraints-files>`_ documentation.
+
+Examples of image customizing
+-----------------------------
+
+.. _image-build-pypi:
+
+
+Building from PyPI packages
+...........................
+
+This is the basic way of building custom images.
+
+The following example builds the production image with Python ``3.6``, the latest PyPI-released Airflow,
+and the default set of Airflow extras and dependencies. The ``2.0.1`` constraints are used automatically.
+
+.. exampleinclude:: docker-examples/customizing/stable-airflow.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+The following example builds the production image with Python ``3.7`` and default extras from the ``2.0.1`` PyPI
+package. The ``2.0.1`` constraints are used automatically.
+
+.. exampleinclude:: docker-examples/customizing/pypi-selected-version.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+The following example builds the production image with Python ``3.8``, additional airflow extras
+(``mssql,hdfs``) from the ``2.0.1`` PyPI package, and an additional Python dependency (``oauth2client``).
+
+.. exampleinclude:: docker-examples/customizing/pypi-extras-and-deps.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+
+The following example adds the ``mpi4py`` package, which requires both ``build-essential`` and an MPI compiler.
+
+.. exampleinclude:: docker-examples/customizing/add-build-essential-custom.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+The above image is the equivalent of the "extended" image from the previous chapter but its size is only
+874 MB. Compared to the 1.1 GB of the "extended" image, this is about 230 MB less, so you can achieve a ~20%
+improvement in the size of the image by using "customization" vs. extension. The savings can increase if you
+have more complex dependencies to build.
+
+
+.. _image-build-optimized:
+
+Building optimized images
+.........................
+
+The following example builds the production image with Python ``3.6`` and additional airflow extras from the ``2.0.1``
+PyPI package, and it also includes additional apt dev and runtime dependencies.
+
+The dev dependencies are those that require ``build-essential`` and usually involve recompiling
+some Python dependencies, so those packages might require some additional DEV dependencies to be
+present during recompilation. Those packages are not needed at runtime, so we only install them for the
+"build" time. They are not installed in the final image, thus producing much smaller images.
+In this case ``pandas`` requires recompilation, so it also needs ``gcc`` and ``g++`` as dev APT dependencies.
+The ``jre-headless`` package does not require recompiling, so it can be installed as a runtime APT dependency.
+
+.. exampleinclude:: docker-examples/customizing/pypi-dev-runtime-deps.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+.. _image-build-github:
+
+
+Building from GitHub
+....................
+
+This method is usually used for development purposes. But if you have your own fork, you can point
+it at your forked version of the source code without having to release it to PyPI. It is enough to have
+a branch or tag in your repository and use that tag or branch in the URL that you point the installation to.
+
+In the case of GitHub builds you need to pass the constraints reference manually if you want to use
+specific constraints, otherwise the default ``constraints-master`` is used.
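+
+If you prefer to call ``docker build`` directly instead of using the example scripts, a sketch of such a
+GitHub-based build (the tag in the archive URL is only an illustration) might look like this:
+
+.. code-block:: bash
+
+  # Sketch only - install Airflow from a GitHub tag/branch archive with explicitly chosen constraints
+  docker build . \
+    --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/v2-0-test.tar.gz#egg=apache-airflow" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0"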
+
+The following example builds the production image with Python ``3.7`` and default extras from the latest master version;
+constraints are taken from the latest version of the ``constraints-master`` branch in GitHub.
+
+.. exampleinclude:: docker-examples/customizing/github-master.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+The following example builds the production image with default extras from the
+latest ``v2-0-test`` version and constraints are taken from the latest version of
+the ``constraints-2-0`` branch in GitHub. Note that this command might fail occasionally, as only
+the "released version" constraints (when building a released version) and the "master" constraints
+(when building master) are guaranteed to work.
+
+.. exampleinclude:: docker-examples/customizing/github-v2-0-test.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+You can also specify another repository to build from. If you also want to use a different constraints
+repository source, you must specify it with the additional ``CONSTRAINTS_GITHUB_REPOSITORY`` build arg.
+
+The following example builds the production image using the ``potiuk/airflow`` fork of Airflow; constraints
+are also downloaded from that repository.
+
+.. exampleinclude:: docker-examples/customizing/github-different-repository.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+.. _image-build-custom:
+
+Using custom installation sources
+.................................
+
+You can customize more aspects of the image - such as additional commands executed before apt dependencies
+are installed, or adding extra sources to install your dependencies from. You can see all the arguments
+described in :doc:`build-arg-ref`, and further below there is an example of a rather complex command to customize the
+image, based on the example in `this comment <https://github.com/apache/airflow/issues/8605#issuecomment-690065621>`_.
+
+In case you need to use your custom PyPI package indexes, you can also customize the PyPI sources used during
+the image build by adding a ``docker-context-files/.pypirc`` file when building the image.
+This ``.pypirc`` will not be committed to the repository (it is added to ``.gitignore``) and it will not be
+present in the final production image. It is added and used only in the build segment of the image.
+Therefore this ``.pypirc`` file can safely contain the list of package indexes you want to use,
+and the usernames and passwords used for authentication. More details about the ``.pypirc`` file can be found in the
+`pypirc specification <https://packaging.python.org/specifications/pypirc/>`_.
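+
+A sketch of what such a ``docker-context-files/.pypirc`` file might contain, following the pypirc
+specification (the index name, URL and credentials below are placeholders for your own setup):
+
+.. code-block:: text
+
+  [distutils]
+  index-servers =
+      my-private-index
+
+  [my-private-index]
+  repository = https://pypi.example.com/
+  username = your-username
+  password = your-password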
+
+Such customizations are independent of the way airflow is installed.
+
+.. note::
+  Similar results could be achieved by modifying the Dockerfile manually (see below) and injecting the
+  commands needed, but by specifying the customizations via build args you avoid the need to
+  synchronize the changes from future Airflow Dockerfiles. Those customizations should work with
+  future versions of Airflow's official ``Dockerfile`` with at most minimal modifications of parameter
+  names (if any), so using the build command for your customizations makes your custom image more
+  future-proof.
+
+The following - rather complex - example shows the capabilities of:
+
+  * Adding airflow extras (``slack``, ``odbc``)
+  * Adding PyPI dependencies (``azure-storage-blob, oauth2client, beautifulsoup4, dateparser, rocketchat_API, typeform``)
+  * Adding custom environment variables while installing ``apt`` dependencies - both DEV and RUNTIME
+    (``ACCEPT_EULA=Y``)
+  * Adding a custom curl command for adding keys and configuring additional apt sources needed to install
+    ``apt`` dependencies (both DEV and RUNTIME)
+  * Adding custom ``apt`` dependencies, both DEV (``msodbcsql17 unixodbc-dev g++``) and runtime (``msodbcsql17 unixodbc git procps vim``)
+
+.. exampleinclude:: docker-examples/customizing/custom-sources.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+.. _image-build-secure-environments:
+
+Build images in security restricted environments
+................................................
+
+You can also make sure your image is built using only a local constraint file and locally downloaded
+wheel files. This is often useful in enterprise environments where the binary files are verified and
+vetted by the security teams. It is also the most complex way of building the image. You should be an
+expert at building and using Dockerfiles in order to use it, and you should have specific security needs
+if you want to follow that route.
+
+The build below produces the production image with packages and constraints taken from the local
+``docker-context-files`` folder rather than installed from PyPI or GitHub. It also disables MySQL client
+installation as it uses an external installation method.
+
+Note that as a prerequisite you need to have downloaded the wheel files. In the example below we
+first download such a constraint file locally and then use ``pip download`` to get the needed ``.whl`` files,
+but in the most likely scenario those wheel files should be copied from an internal repository of such ``.whl``
+files. Note that ``AIRFLOW_VERSION_SPECIFICATION`` is only there for reference; the apache airflow ``.whl`` file
+in the right version is part of the ``.whl`` files downloaded.
+
+Note that ``pip download`` will only work on a Linux host, as some of the packages need to be compiled from
+sources and you cannot install them by providing the ``--platform`` switch. They also need to be downloaded using
+the same Python version as the target image.
+
+The ``pip download`` might happen in a separate environment. The files can be committed to a separate
+binary repository, vetted/verified by the security team, and used subsequently to build images
+of Airflow when needed on an air-gapped system.
+
+Example of preparing the constraint files and wheel files. Note that the ``mysql`` dependency is removed,
+as ``mysqlclient`` is installed from Oracle's ``apt`` repository; if you want to add it, you need
+to provide this library from your own repository if you want to build the Airflow image in an "air-gapped" system.
+
+.. exampleinclude:: docker-examples/restricted/restricted_environments.sh
+    :language: bash
+    :start-after: [START download]
+    :end-before: [END download]
+
+After this step is finished, your ``docker-context-files`` folder will contain all the packages that
+are needed to install Airflow from.
+
+Those downloaded packages and constraint file can be pre-vetted by your security team before you attempt
+to build the image. You can also store those downloaded binary packages in your private artifact registry,
+which allows for a flow where you download the packages on one machine, submit only new packages for
+security vetting, and only use the new packages once they have been vetted.
+
+On a separate (air-gapped) system, all the PyPI packages can be copied to ``docker-context-files``,
+where you can build the image using the downloaded packages by passing these build args (a sketch of the
+resulting command follows the list):
+
+  * ``INSTALL_FROM_DOCKER_CONTEXT_FILES="true"``  - to use packages present in ``docker-context-files``
+  * ``AIRFLOW_PRE_CACHED_PIP_PACKAGES="false"``  - to not pre-cache packages from PyPI when building image
+  * ``AIRFLOW_CONSTRAINTS_LOCATION=/docker-context-files/YOUR_CONSTRAINT_FILE.txt`` - to use the downloaded constraint file
+  * (Optional) ``INSTALL_MYSQL_CLIENT="false"`` if you do not want to install ``MySQL``
+    client from the Oracle repositories. In this case also make sure that your
+
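+A hedged sketch of such a build command (the constraint file name is a placeholder for the file you
+placed in ``docker-context-files``) might look like this:
+
+.. code-block:: bash
+
+  # Sketch only - install everything from the local docker-context-files folder
+  docker build . \
+    --build-arg INSTALL_FROM_DOCKER_CONTEXT_FILES="true" \
+    --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="false" \
+    --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/YOUR_CONSTRAINT_FILE.txt" \
+    --build-arg INSTALL_MYSQL_CLIENT="false"
+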
+Note that the solution we have for installing Python packages from local packages only solves the problem
+of an "air-gapped" Python installation. The Docker image also downloads ``apt`` dependencies and ``node-modules``.
+Those types of dependencies are, however, more likely to be available in your "air-gapped" system via transparent
+proxies, so the build should automatically reach out to your private registries. In the future the
+solution might be applied to both of those installation steps.
+
+You can also use the techniques described in the previous chapter to make ``docker build`` use your private
+apt sources or private PyPI repositories (via ``.pypirc``), which can be security-vetted.
+
+If you fulfill all the criteria, you can build the image on an air-gapped system by running a command similar
+to the one below:
+
+.. exampleinclude:: docker-examples/restricted/restricted_environments.sh
+    :language: bash
+    :start-after: [START build]
+    :end-before: [END build]
+
+Modifying the Dockerfile
+........................
+
+The build-arg approach is a convenience method if you do not want to manually modify the ``Dockerfile``.
+Our approach is flexible enough to be able to accommodate most requirements and
+customizations out of the box. When you use it, you do not need to worry about adapting the image every
+time a new version of Airflow is released. However, sometimes it is not enough if you have very
+specific needs and want to build a very custom image. In such a case you can simply modify the
+``Dockerfile`` manually as you see fit and store it in your forked repository. However, you will have to
+make sure to rebase your changes whenever a new version of Airflow is released, because we might modify
+the approach of our Dockerfile builds in the future and you might need to resolve conflicts
+and rebase your changes.
+
+There are a few things to remember when you modify the ``Dockerfile``:
+
+* We are using the widely recommended pattern of ``.dockerignore``, where everything is ignored by default
+  and only the required folders are added back through exclusion (``!``). This allows the docker context to stay small,
+  because there are many binary artifacts generated in the sources of Airflow, and if they were added to
+  the context, the time to build the image would increase significantly. If you want to add any new
+  folders to be available in the image you must add them here with a leading ``!``.
+
+  .. code-block:: text
+
+      # Ignore everything
+      **
+
+      # Allow only these directories
+      !airflow
+      ...
+
+
+* The ``docker-context-files`` folder is automatically added to the context of the image, so if you want
+  to add individual files, binaries, requirement files etc. you can add them there. The
+  ``docker-context-files`` folder is copied to the ``/docker-context-files`` folder of the build segment of the
+  image, so it is not present in the final image - which makes the final image smaller in case you want
+  to use those files only in the ``build`` segment. If you want to get the files into your final image
+  (in the main image segment), you must copy them manually using the ``COPY`` command, as shown in the sketch below.
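+
+  A minimal sketch of such a ``COPY`` in the main image segment (the file name is just an illustration):
+
+  .. code-block:: Dockerfile
+
+      # Sketch only - copy a single file from the docker-context-files folder of the build context
+      # into the final image
+      COPY docker-context-files/requirements.txt /opt/airflow/requirements.txt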
+
+
+More details
+------------
+
+Build Args reference
+....................
+
+The detailed ``--build-arg`` reference can be found in :doc:`build-arg-ref`.
+
+
+The architecture of the images
+..............................
+
+You can read more details about the images - the context, their parameters and internal structure in the
+`IMAGES.rst <https://github.com/apache/airflow/blob/master/IMAGES.rst>`_ document.
diff --git a/docs/apache-airflow/docker-images-recipes/gcloud.Dockerfile b/docs/docker-stack/docker-images-recipes/gcloud.Dockerfile
similarity index 100%
rename from docs/apache-airflow/docker-images-recipes/gcloud.Dockerfile
rename to docs/docker-stack/docker-images-recipes/gcloud.Dockerfile
diff --git a/docs/apache-airflow/docker-images-recipes/hadoop.Dockerfile b/docs/docker-stack/docker-images-recipes/hadoop.Dockerfile
similarity index 100%
rename from docs/apache-airflow/docker-images-recipes/hadoop.Dockerfile
rename to docs/docker-stack/docker-images-recipes/hadoop.Dockerfile
diff --git a/docs/docker-stack/entrypoint.rst b/docs/docker-stack/entrypoint.rst
new file mode 100644
index 0000000..a7889c4
--- /dev/null
+++ b/docs/docker-stack/entrypoint.rst
@@ -0,0 +1,201 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Entrypoint
+==========
+
+If you are using the default entrypoint of the production image,
+there are a few actions that are automatically performed when the container starts.
+In some cases, you can pass environment variables to the image to trigger some of that behaviour.
+
+The variables that control the "execution" behaviour start with ``_AIRFLOW`` to distinguish them
+from the variables used to build the image starting with ``AIRFLOW``.
+
+The image entrypoint works as follows:
+
+* In case the user is not "airflow" (i.e. it has an arbitrary user id) and the group id of the user is set to ``0`` (root),
+  then the user is dynamically added to ``/etc/passwd`` at entry, using the ``USER_NAME`` variable to define the user name.
+  This is in order to accommodate the
+  `OpenShift Guidelines <https://docs.openshift.com/enterprise/3.0/creating_images/guidelines.html>`_.
+
+* ``AIRFLOW_HOME`` is set by default to ``/opt/airflow/`` - this means that DAGs
+  are by default in the ``/opt/airflow/dags`` folder and logs are in the ``/opt/airflow/logs`` folder.
+
+* The working directory is ``/opt/airflow`` by default.
+
+* If the ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable is passed to the container and it is either a MySQL or Postgres
+  SQLAlchemy connection, then the connection is checked and the script waits until the database is reachable.
+  If the ``AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD`` variable is passed to the container, it is evaluated as a
+  command to execute and the result of this evaluation is used as ``AIRFLOW__CORE__SQL_ALCHEMY_CONN``. The
+  ``_CMD`` variable takes precedence over the ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable (see the sketch after this list).
+
+* If no ``AIRFLOW__CORE__SQL_ALCHEMY_CONN`` variable is set, then an SQLite database is created in
+  ``${AIRFLOW_HOME}/airflow.db`` and a db reset is executed.
+
+* If first argument equals to "bash" - you are dropped to a bash shell or you can executes bash command
+  if you specify extra arguments. For example:
+
+  .. code-block:: bash
+
+    docker run -it apache/airflow:master-python3.6 bash -c "ls -la"
+    total 16
+    drwxr-xr-x 4 airflow root 4096 Jun  5 18:12 .
+    drwxr-xr-x 1 root    root 4096 Jun  5 18:12 ..
+    drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 dags
+    drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 logs
+
+* If the first argument is equal to ``python`` - you are dropped into a Python shell, or Python commands are executed if
+  you pass extra parameters. For example:
+
+  .. code-block:: bash
+
+    > docker run -it apache/airflow:master-python3.6 python -c "print('test')"
+    test
+
+* If first argument equals to "airflow" - the rest of the arguments is treated as an airflow command
+  to execute. Example:
+
+  .. code-block:: bash
+
+     docker run -it apache/airflow:master-python3.6 airflow webserver
+
+* If there are any other arguments - they are simply passed to the "airflow" command. For example:
+
+  .. code-block:: bash
+
+    > docker run -it apache/airflow:master-python3.6 version
+    2.1.0.dev0
+
+* If the ``AIRFLOW__CELERY__BROKER_URL`` variable is passed and an airflow command with the
+  scheduler, worker or flower command is used, then the script checks the broker connection
+  and waits until the Celery broker database is reachable.
+  If the ``AIRFLOW__CELERY__BROKER_URL_CMD`` variable is passed to the container, it is evaluated as a
+  command to execute and the result of this evaluation is used as ``AIRFLOW__CELERY__BROKER_URL``. The
+  ``_CMD`` variable takes precedence over the ``AIRFLOW__CELERY__BROKER_URL`` variable.
+
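+For example, a sketch of passing the ``_CMD`` variant of the connection variable (the secret file path
+below is only an illustration) might look like this:
+
+.. code-block:: bash
+
+  # Sketch only - the connection string is produced by a command instead of being set directly
+  docker run -it \
+    --env "AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD=cat /run/secrets/sql_alchemy_conn" \
+      apache/airflow:master-python3.8 webserver
+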
+Creating system user
+--------------------
+
+The Airflow image is OpenShift compatible, which means that you can start it with a random user ID and the group id ``0`` (root).
+Airflow will automatically create such a user and make its home directory point to ``/home/airflow``.
+You can read more about it in the "Support arbitrary user ids" chapter in the
+`OpenShift best practices <https://docs.openshift.com/container-platform/4.1/openshift_images/create-images.html#images-create-guide-openshift_create-images>`_.
+
+Waits for Airflow DB connection
+-------------------------------
+
+In case a Postgres or MySQL DB is used, the entrypoint will wait until the airflow DB connection becomes
+available. This always happens when you use the default entrypoint.
+
+The script detects the backend type depending on the URL schema and assigns default port numbers if they are not specified
+in the URL. Then it loops until the connection to the host/port specified can be established.
+It tries ``CONNECTION_CHECK_MAX_COUNT`` times and sleeps ``CONNECTION_CHECK_SLEEP_TIME`` between checks.
+To disable the check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
+
+Supported schemes:
+
+* ``postgres://`` - default port 5432
+* ``mysql://``    - default port 3306
+* ``sqlite://``
+
+In case of SQLite backend, there is no connection to establish and waiting is skipped.
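+
+For example, a sketch of tuning or disabling this wait behaviour via the environment variables mentioned above:
+
+.. code-block:: bash
+
+  # Sketch only - try up to 60 times, sleeping between checks
+  docker run -it \
+    --env "CONNECTION_CHECK_MAX_COUNT=60" \
+    --env "CONNECTION_CHECK_SLEEP_TIME=5" \
+      apache/airflow:master-python3.8 webserver
+
+  # Or disable the check entirely
+  docker run -it --env "CONNECTION_CHECK_MAX_COUNT=0" apache/airflow:master-python3.8 webserver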
+
+Upgrading Airflow DB
+--------------------
+
+If you set the ``_AIRFLOW_DB_UPGRADE`` variable to a non-empty value, the entrypoint will run
+the ``airflow db upgrade`` command right after verifying the connection. You can also use this
+when you are running airflow with the internal SQLite database (the default) to upgrade the db and create
+the admin user at the entrypoint, so that you can start the webserver immediately. Note - using SQLite is
+intended only for testing purposes; never use SQLite in production, as it has severe limitations when it
+comes to concurrency.
+
+Creating admin user
+-------------------
+
+The entrypoint can also create the webserver user automatically when you enter it. You need to set
+``_AIRFLOW_WWW_USER_CREATE`` to a non-empty value in order to do that. This is not intended for
+production; it is only useful if you would like to run a quick test with the production image.
+You need to pass at least a password to create such a user, via ``_AIRFLOW_WWW_USER_PASSWORD`` or
+``_AIRFLOW_WWW_USER_PASSWORD_CMD``. Similarly to the other ``*_CMD`` variables, the content of
+the ``*_CMD`` variable will be evaluated as a shell command and its output will be set as the password.
+
+User creation will fail if none of the ``PASSWORD`` variables are set - there is no default for
+password for security reasons.
+
++-----------+--------------------------+----------------------------------------------------------------------+
+| Parameter | Default                  | Environment variable                                                 |
++===========+==========================+======================================================================+
+| username  | admin                    | ``_AIRFLOW_WWW_USER_USERNAME``                                       |
++-----------+--------------------------+----------------------------------------------------------------------+
+| password  |                          | ``_AIRFLOW_WWW_USER_PASSWORD_CMD`` or ``_AIRFLOW_WWW_USER_PASSWORD`` |
++-----------+--------------------------+----------------------------------------------------------------------+
+| firstname | Airflow                  | ``_AIRFLOW_WWW_USER_FIRSTNAME``                                      |
++-----------+--------------------------+----------------------------------------------------------------------+
+| lastname  | Admin                    | ``_AIRFLOW_WWW_USER_LASTNAME``                                       |
++-----------+--------------------------+----------------------------------------------------------------------+
+| email     | airflowadmin@example.com | ``_AIRFLOW_WWW_USER_EMAIL``                                          |
++-----------+--------------------------+----------------------------------------------------------------------+
+| role      | Admin                    | ``_AIRFLOW_WWW_USER_ROLE``                                           |
++-----------+--------------------------+----------------------------------------------------------------------+
+
+In case the password is specified, an attempt will be made to create the user, but the entrypoint will
+not fail if the attempt fails (this accounts for the case where the user has already been created).
+
+You can, for example, start the webserver in the production image, initializing the internal SQLite
+database and creating an ``admin/admin`` Admin user, with the following command:
+
+.. code-block:: bash
+
+  docker run -it -p 8080:8080 \
+    --env "_AIRFLOW_DB_UPGRADE=true" \
+    --env "_AIRFLOW_WWW_USER_CREATE=true" \
+    --env "_AIRFLOW_WWW_USER_PASSWORD=admin" \
+      apache/airflow:master-python3.8 webserver
+
+
+.. code-block:: bash
+
+  docker run -it -p 8080:8080 \
+    --env "_AIRFLOW_DB_UPGRADE=true" \
+    --env "_AIRFLOW_WWW_USER_CREATE=true" \
+    --env "_AIRFLOW_WWW_USER_PASSWORD_CMD=echo admin" \
+      apache/airflow:master-python3.8 webserver
+
+The commands above initialize the SQLite database and create an ``admin`` user with the ``admin`` password
+and the Admin role. They also forward local port ``8080`` to the webserver port and finally start the webserver.
+
+Waits for celery broker connection
+----------------------------------
+
+In case a Postgres or MySQL DB is used, and one of the ``scheduler``, ``celery``, ``worker``, or ``flower``
+commands is used, the entrypoint will wait until the celery broker DB connection is available.
+
+The script detects the backend type depending on the URL schema and assigns default port numbers if they are not specified
+in the URL. Then it loops until a connection to the host/port specified can be established.
+It tries ``CONNECTION_CHECK_MAX_COUNT`` times and sleeps ``CONNECTION_CHECK_SLEEP_TIME`` between checks.
+To disable the check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
+
+Supported schemes:
+
+* ``amqp(s)://``  (rabbitmq) - default port 5672
+* ``redis://``               - default port 6379
+* ``postgres://``            - default port 5432
+* ``mysql://``               - default port 3306
+* ``sqlite://``
+
+In case of SQLite backend, there is no connection to establish and waiting is skipped.
diff --git a/docs/docker-stack/img/docker-logo.png b/docs/docker-stack/img/docker-logo.png
new file mode 100644
index 0000000..d83e54a
Binary files /dev/null and b/docs/docker-stack/img/docker-logo.png differ
diff --git a/docs/docker-stack/index.rst b/docs/docker-stack/index.rst
new file mode 100644
index 0000000..29a7daf
--- /dev/null
+++ b/docs/docker-stack/index.rst
@@ -0,0 +1,54 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+.. image:: /img/docker-logo.png
+    :width: 100
+
+Docker Image for Apache Airflow
+===============================
+
+.. toctree::
+    :hidden:
+
+    Home <self>
+    build
+    entrypoint
+    recipes
+
+.. toctree::
+    :hidden:
+    :caption: References
+
+    build-arg-ref
+
+For the ease of deployment in production, the community releases a production-ready reference container
+image.
+
+The docker image provided (as a convenience binary package) in the
+`apache/airflow DockerHub <https://hub.docker.com/r/apache/airflow>`_ is a bare image
+that has only a few external dependencies and extras installed.
+
+The Apache Airflow image provided as a convenience package is optimized for size, so
+it includes just a minimal set of extras and dependencies, and in most cases
+you will want to either extend or customize the image. You can see all possible extras in
+:doc:`extra-packages-ref`. The set of extras used in the Airflow production image is available in the
+`Dockerfile <https://github.com/apache/airflow/blob/2c6c7fdb2308de98e142618836bdf414df9768c8/Dockerfile#L39>`_.
+
+The production images are built in DockerHub from released versions and release candidates. There
+are also images published from branches, but they are used mainly for development and testing purposes.
+See `Airflow Git Branching <https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#airflow-git-branches>`_
+for details.
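+
+For example, pulling a released image could look as follows (the tag is an illustration - check DockerHub
+for the currently available tags):
+
+.. code-block:: bash
+
+  docker pull apache/airflow:2.0.1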
diff --git a/docs/docker-stack/recipes.rst b/docs/docker-stack/recipes.rst
new file mode 100644
index 0000000..8b89a3e
--- /dev/null
+++ b/docs/docker-stack/recipes.rst
@@ -0,0 +1,70 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Recipes
+=======
+
+Users sometimes share interesting ways of using the Docker images. We encourage users to contribute these
+recipes to the documentation by submitting a pull request, in case they prove useful to other members of
+the community. The sections below capture this knowledge.
+
+Google Cloud SDK installation
+-----------------------------
+
+Some operators, such as :class:`~airflow.providers.google.cloud.operators.kubernetes_engine.GKEStartPodOperator`,
+:class:`~airflow.providers.google.cloud.operators.dataflow.DataflowStartSqlJobOperator`, require
+the installation of `Google Cloud SDK <https://cloud.google.com/sdk>`__ (includes ``gcloud``).
+You can also run these commands with BashOperator.
+
+Create a new Dockerfile like the one shown below.
+
+.. exampleinclude:: /docker-images-recipes/gcloud.Dockerfile
+    :language: dockerfile
+
+Then build a new image.
+
+.. code-block:: bash
+
+  docker build . \
+    --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.1" \
+    -t my-airflow-image
+
+
+Apache Hadoop Stack installation
+--------------------------------
+
+Airflow is often used to run tasks on a Hadoop cluster. It requires a Java Runtime Environment (JRE) to run.
+Below are the steps to install tools that are frequently used in the Hadoop world:
+
+- Java Runtime Environment (JRE)
+- Apache Hadoop
+- Apache Hive
+- `Cloud Storage connector for Apache Hadoop <https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage>`__
+
+
+Create a new Dockerfile like the one shown below.
+
+.. exampleinclude:: /docker-images-recipes/hadoop.Dockerfile
+    :language: dockerfile
+
+Then build a new image.
+
+.. code-block:: bash
+
+  docker build . \
+    --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.1" \
+    -t my-airflow-image
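+
+As a quick smoke test of the resulting image you can, for example, print the installed Airflow version
+(assuming the entrypoint forwards the subcommand to ``airflow``):
+
+.. code-block:: bash
+
+  docker run --rm my-airflow-image version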
diff --git a/docs/exts/airflow_intersphinx.py b/docs/exts/airflow_intersphinx.py
index ee83b8f..750579f 100644
--- a/docs/exts/airflow_intersphinx.py
+++ b/docs/exts/airflow_intersphinx.py
@@ -67,14 +67,15 @@ def _generate_provider_intersphinx_mapping():
             f'/docs/apache-airflow/{current_version}/',
             (doc_inventory if os.path.exists(doc_inventory) else cache_inventory,),
         )
+    for pkg_name in ['apache-airflow-providers', 'docker-stack']:
+        if os.environ.get('AIRFLOW_PACKAGE_NAME') == pkg_name:
+            continue
+        doc_inventory = f'{DOCS_DIR}/_build/docs/{pkg_name}/objects.inv'
+        cache_inventory = f'{DOCS_DIR}/_inventory_cache/{pkg_name}/objects.inv'
 
-    if os.environ.get('AIRFLOW_PACKAGE_NAME') != 'apache-airflow-providers':
-        doc_inventory = f'{DOCS_DIR}/_build/docs/apache-airflow-providers/objects.inv'
-        cache_inventory = f'{DOCS_DIR}/_inventory_cache/apache-airflow-providers/objects.inv'
-
-        airflow_mapping['apache-airflow-providers'] = (
+        airflow_mapping[pkg_name] = (
             # base URI
-            '/docs/apache-airflow-providers/',
+            f'/docs/{pkg_name}/',
             (doc_inventory if os.path.exists(doc_inventory) else cache_inventory,),
         )
 
diff --git a/docs/exts/docs_build/dev_index_template.html.jinja2 b/docs/exts/docs_build/dev_index_template.html.jinja2
index 0de5879..b680255 100644
--- a/docs/exts/docs_build/dev_index_template.html.jinja2
+++ b/docs/exts/docs_build/dev_index_template.html.jinja2
@@ -67,6 +67,17 @@
       </ul>
     </div>
   </div>
+  <div class="row">
+    <div class="col-md order-md-1">
+      <img src="/docs/docker-stack/_images/docker-logo.png" alt="Docker - logo" width="100" height="86">
+    </div>
+    <div class="col-md">
+      <h2><a href="/docs/docker-stack/index.html">Docker image</a></h2>
+      <p>
+      It provides an efficient, lightweight, self-contained environment and guarantees that the software will always run the same no matter where it’s deployed.
+      </p>
+     </div>
+  </div>
 </div>
 </body>
 </html>
diff --git a/docs/exts/docs_build/docs_builder.py b/docs/exts/docs_build/docs_builder.py
index 6874f78..71e4acb 100644
--- a/docs/exts/docs_build/docs_builder.py
+++ b/docs/exts/docs_build/docs_builder.py
@@ -54,9 +54,9 @@ class AirflowDocsBuilder:
     @property
     def is_versioned(self):
         """Is current documentation package versioned?"""
-        # Disable versioning. This documentation does not apply to any issued product and we can update
+        # Disable versioning. This documentation does not apply to any released product and we can update
         # it as needed, i.e. with each new package of providers.
-        return self.package_name != 'apache-airflow-providers'
+        return self.package_name not in ('apache-airflow-providers', 'docker-stack')
 
     @property
     def _build_dir(self) -> str:
@@ -231,4 +231,9 @@ def get_available_providers_packages():
 def get_available_packages():
     """Get list of all available packages to build."""
     provider_package_names = get_available_providers_packages()
-    return ["apache-airflow", *provider_package_names, "apache-airflow-providers"]
+    return [
+        "apache-airflow",
+        *provider_package_names,
+        "apache-airflow-providers",
+        "docker-stack",
+    ]
diff --git a/docs/exts/docs_build/fetch_inventories.py b/docs/exts/docs_build/fetch_inventories.py
index e9da264..da66d02 100644
--- a/docs/exts/docs_build/fetch_inventories.py
+++ b/docs/exts/docs_build/fetch_inventories.py
@@ -20,10 +20,13 @@ import concurrent.futures
 import datetime
 import os
 import shutil
+from itertools import repeat
+from typing import Iterator, List, Tuple
 
 import requests
 from requests.adapters import DEFAULT_POOLSIZE
 
+from airflow.utils.helpers import partition
 from docs.exts.docs_build.docs_builder import (  # pylint: disable=no-name-in-module
     get_available_providers_packages,
 )
@@ -42,17 +45,22 @@ S3_DOC_URL_VERSIONED = S3_DOC_URL + "/docs/{package_name}/latest/objects.inv"
 S3_DOC_URL_NON_VERSIONED = S3_DOC_URL + "/docs/{package_name}/objects.inv"
 
 
-def _fetch_file(session: requests.Session, url: str, path: str):
+def _fetch_file(session: requests.Session, package_name: str, url: str, path: str) -> Tuple[str, bool]:
+    """
+    Download a file and return status information as a tuple with the package
+    name and the success status (bool value).
+    """
     response = session.get(url, allow_redirects=True, stream=True)
     if not response.ok:
         print(f"Failed to fetch inventory: {url}")
-        return
+        return package_name, False
 
     os.makedirs(os.path.dirname(path), exist_ok=True)
     with open(path, 'wb') as f:
         response.raw.decode_content = True
         shutil.copyfileobj(response.raw, f)
     print(f"Fetched inventory: {url}")
+    return package_name, True
 
 
 def _is_outdated(path: str):
@@ -65,42 +73,61 @@ def _is_outdated(path: str):
 def fetch_inventories():
     """Fetch all inventories for Airflow documentation packages and store in cache."""
     os.makedirs(os.path.dirname(CACHE_DIR), exist_ok=True)
-    to_download = []
+    to_download: List[Tuple[str, str, str]] = []
 
     for pkg_name in get_available_providers_packages():
         to_download.append(
             (
+                pkg_name,
                 S3_DOC_URL_VERSIONED.format(package_name=pkg_name),
                 f'{CACHE_DIR}/{pkg_name}/objects.inv',
             )
         )
     to_download.append(
         (
+            "apache-airflow",
             S3_DOC_URL_VERSIONED.format(package_name='apache-airflow'),
             f'{CACHE_DIR}/apache-airflow/objects.inv',
         )
     )
-    to_download.append(
-        (
-            S3_DOC_URL_NON_VERSIONED.format(package_name='apache-airflow-providers'),
-            f'{CACHE_DIR}/apache-airflow-providers/objects.inv',
+    for pkg_name in ['apache-airflow-providers', 'docker-stack']:
+        to_download.append(
+            (
+                pkg_name,
+                S3_DOC_URL_NON_VERSIONED.format(package_name=pkg_name),
+                f'{CACHE_DIR}/{pkg_name}/objects.inv',
+            )
         )
-    )
     to_download.extend(
         (
+            pkg_name,
             f"{doc_url}/objects.inv",
             f'{CACHE_DIR}/{pkg_name}/objects.inv',
         )
         for pkg_name, doc_url in THIRD_PARTY_INDEXES.items()
     )
 
-    to_download = [(url, path) for url, path in to_download if _is_outdated(path)]
+    to_download = [(pkg_name, url, path) for pkg_name, url, path in to_download if _is_outdated(path)]
     if not to_download:
         print("Nothing to do")
-        return
+        return []
 
     print(f"To download {len(to_download)} inventorie(s)")
 
     with requests.Session() as session, concurrent.futures.ThreadPoolExecutor(DEFAULT_POOLSIZE) as pool:
-        for url, path in to_download:
-            pool.submit(_fetch_file, session=session, url=url, path=path)
+        download_results: Iterator[Tuple[str, bool]] = pool.map(
+            _fetch_file,
+            repeat(session, len(to_download)),
+            (pkg_name for pkg_name, _, _ in to_download),
+            (url for _, url, _ in to_download),
+            (path for _, _, path in to_download),
+        )
+    failed, success = partition(lambda d: d[1], download_results)
+    failed, success = list(failed), list(success)
+    print(f"Result: {len(success)} success, {len(failed)} failed")
+    if failed:
+        print("Failed packages:")
+        for pkg_no, (pkg_name, _) in enumerate(failed, start=1):
+            print(f"{pkg_no}. {pkg_name}")
+
+    return [pkg_name for pkg_name, status in failed]

[airflow] 03/05: Fixes default group of Airflow user. (#14944)

Posted by po...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 3cd0bdc040dce59f9ebd64c0081609a2c9a8ec16
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Tue Mar 23 03:20:23 2021 +0100

    Fixes default group of Airflow user. (#14944)
    
    The production image did not have root group set as default for
    the airflow user. This was not a big problem unless you extended
    the image - in which case you had to change the group manually
    when copying the images in order to keep the image OpenShift
    compatible (i.e. runnable with any user and root group).
    
    This PR fixes it by changing default group of airflow user
    to root, which also works when you extend the image.
    
    ```
    Connected.
    airflow@53f70b1e3675:/opt/airflow$ ls
    dags  logs
    airflow@53f70b1e3675:/opt/airflow$ cd dags/
    airflow@53f70b1e3675:/opt/airflow/dags$ ls -l
    total 4
    -rw-r--r-- 1 airflow root 1648 Mar 22 23:16 test_dag.py
    airflow@53f70b1e3675:/opt/airflow/dags$
    ```
---
 Dockerfile                    | 2 ++
 scripts/ci/libraries/_kind.sh | 8 ++------
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index a62ce15..4b1b807 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -485,6 +485,8 @@ WORKDIR ${AIRFLOW_HOME}
 
 EXPOSE 8080
 
+RUN usermod -g 0 airflow
+
 USER ${AIRFLOW_UID}
 
 # Having the variable in final image allows to disable providers manager warnings when
diff --git a/scripts/ci/libraries/_kind.sh b/scripts/ci/libraries/_kind.sh
index f6be375..4fbfee1 100644
--- a/scripts/ci/libraries/_kind.sh
+++ b/scripts/ci/libraries/_kind.sh
@@ -255,13 +255,9 @@ function kind::build_image_for_kubernetes_tests() {
     docker build --tag "${AIRFLOW_PROD_IMAGE_KUBERNETES}" . -f - <<EOF
 FROM ${AIRFLOW_PROD_IMAGE}
 
-USER root
+COPY airflow/example_dags/ \${AIRFLOW_HOME}/dags/
 
-COPY --chown=airflow:root airflow/example_dags/ \${AIRFLOW_HOME}/dags/
-
-COPY --chown=airflow:root airflow/kubernetes_executor_templates/ \${AIRFLOW_HOME}/pod_templates/
-
-USER airflow
+COPY airflow/kubernetes_executor_templates/ \${AIRFLOW_HOME}/pod_templates/
 
 EOF
     echo "The ${AIRFLOW_PROD_IMAGE_KUBERNETES} is prepared for test kubernetes deployment."

[airflow] 04/05: Much easier to use and better documented Docker image (#14911)

Posted by po...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 7bb7380453be3d1e0b0f0692544e2ecd05eca4e8
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Tue Mar 23 04:13:17 2021 +0100

    Much easier to use and better documented Docker image (#14911)
    
    Previously you had to specify AIRFLOW_VERSION_REFERENCE and
    AIRFLOW_CONSTRAINTS_REFERENCE to point to the right version
    of Airflow. Now those values are auto-detected if not specified
    (but you can still override them)
    
    This change allowed to simplify and restructure the Dockerfile
    documentation - following the recent change in separating out
    the docker-stack, production image building documentation has
    been improved to reflect those simplifications. It should be
    much easier to grasp by the novice users now - very clear
    distinction and separation is made between the two types of
    building your own images - customizing or extending - and it
    is now much easier to follow examples and find out how to
    build your own image. The criteria on which approach to
    choose were put first and forefront.
    
    Examples have been reviewed, fixed and put in a logical
    sequence. From the most basic ones to the most advanced,
    with clear indication where the basic aproach ends and where
    the "power-user" one starts. The examples were also separated
    out to separate files and included from there - also the
    example Docker images and build commands are executable
    and tested automatically in CI, so they are guaranteed
    to work.
    
    Finally The build arguments were split into sections - from most
    basic to most advanced and each section links to appropriate
    example section, showing how to use those parameters.
    
    Fixes: #14848
    Fixes: #14255
---
 .github/workflows/ci.yml                           |  24 ++
 Dockerfile                                         |  64 ++---
 Dockerfile.ci                                      |  15 +-
 IMAGES.rst                                         |   5 +-
 breeze                                             |   5 -
 docs/docker-stack/build-arg-ref.rst                | 267 ++++++++++++---------
 .../customizing/add-build-essential-custom.sh      |  33 +++
 .../docker-examples/customizing/custom-sources.sh  |  48 ++++
 .../customizing/github-different-repository.sh     |  31 +++
 .../docker-examples/customizing/github-master.sh   |  31 +++
 .../customizing/github-v2-0-test.sh                |  31 +++
 .../customizing/pypi-dev-runtime-deps.sh           |  34 +++
 .../customizing/pypi-extras-and-deps.sh            |  32 +++
 .../customizing/pypi-selected-version.sh           |  30 +++
 .../docker-examples/customizing/stable-airflow.sh  |  28 +++
 .../extending/add-apt-packages/Dockerfile          |  27 +++
 .../add-build-essential-extend/Dockerfile          |  28 +++
 .../extending/add-pypi-packages/Dockerfile         |  20 ++
 .../extending/embedding-dags/Dockerfile            |  22 ++
 .../extending/embedding-dags/test_dag.py           |  39 +++
 .../restricted/restricted_environments.sh          |  44 ++++
 scripts/ci/images/ci_run_prod_image_test.sh        |  50 ++++
 .../ci_test_examples_of_prod_image_building.sh     |  91 +++++++
 scripts/ci/libraries/_build_images.sh              |   1 +
 scripts/ci/libraries/_docker_engine_resources.sh   |   9 +-
 scripts/ci/libraries/_initialization.sh            |   1 +
 scripts/ci/libraries/_parallel.sh                  |  70 +++++-
 scripts/ci/testing/ci_run_airflow_testing.sh       |  59 +----
 scripts/docker/common.sh                           |  63 +++++
 scripts/docker/compile_www_assets.sh               |   5 +-
 scripts/docker/install_airflow.sh                  |  18 +-
 scripts/docker/install_airflow_from_branch_tip.sh  |  13 +-
 .../docker/install_from_docker_context_files.sh    |  26 +-
 33 files changed, 993 insertions(+), 271 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index b47022d..79eb7fb 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -216,6 +216,7 @@ jobs:
           fi
 
   test-openapi-client-generation:
+    timeout-minutes: 10
     name: "Test OpenAPI client generation"
     runs-on: ${{ fromJson(needs.build-info.outputs.runsOn) }}
     needs: [build-info]
@@ -229,6 +230,29 @@ jobs:
       - name: "Generate client codegen diff"
         run: ./scripts/ci/openapi/client_codegen_diff.sh
 
+  test-examples-of-prod-image-building:
+    timeout-minutes: 60
+    name: "Test examples of production image building"
+    runs-on: ${{ fromJson(needs.build-info.outputs.runsOn) }}
+    needs: [build-info]
+    if: needs.build-info.outputs.image-build == 'true'
+    steps:
+      - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )"
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 2
+          persist-credentials: false
+      - name: "Free space"
+        run: ./scripts/ci/tools/ci_free_space_on_ci.sh
+        if: |
+          needs.build-info.outputs.waitForImage == 'true'
+      - name: "Setup python"
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{needs.build-info.outputs.defaultPythonVersion}}
+      - name: "Test examples of PROD image building"
+        run: ./scripts/ci/images/ci_test_examples_of_prod_image_building.sh
+
   ci-images:
     timeout-minutes: 120
     name: "Wait for CI images"
diff --git a/Dockerfile b/Dockerfile
index 4b1b807..a98b729 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -33,7 +33,7 @@
 #                        all the build essentials. This makes the image
 #                        much smaller.
 #
-ARG AIRFLOW_VERSION="2.0.0.dev0"
+ARG AIRFLOW_VERSION="2.0.1"
 ARG AIRFLOW_EXTRAS="async,amazon,celery,cncf.kubernetes,docker,dask,elasticsearch,ftp,grpc,hashicorp,http,ldap,google,microsoft.azure,mysql,postgres,redis,sendgrid,sftp,slack,ssh,statsd,virtualenv"
 ARG ADDITIONAL_AIRFLOW_EXTRAS=""
 ARG ADDITIONAL_PYTHON_DEPS=""
@@ -45,7 +45,6 @@ ARG AIRFLOW_GID="50000"
 ARG CASS_DRIVER_BUILD_CONCURRENCY="8"
 
 ARG PYTHON_BASE_IMAGE="python:3.6-slim-buster"
-ARG PYTHON_MAJOR_MINOR_VERSION="3.6"
 
 ARG AIRFLOW_PIP_VERSION=20.2.4
 
@@ -61,9 +60,6 @@ SHELL ["/bin/bash", "-o", "pipefail", "-e", "-u", "-x", "-c"]
 ARG PYTHON_BASE_IMAGE
 ENV PYTHON_BASE_IMAGE=${PYTHON_BASE_IMAGE}
 
-ARG PYTHON_MAJOR_MINOR_VERSION
-ENV PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION}
-
 # Make sure noninteractive debian install is used and language variables set
 ENV DEBIAN_FRONTEND=noninteractive LANGUAGE=C.UTF-8 LANG=C.UTF-8 LC_ALL=C.UTF-8 \
     LC_CTYPE=C.UTF-8 LC_MESSAGES=C.UTF-8
@@ -165,12 +161,16 @@ ENV AIRFLOW_EXTRAS=${AIRFLOW_EXTRAS}${ADDITIONAL_AIRFLOW_EXTRAS:+,}${ADDITIONAL_
 ARG CONSTRAINTS_GITHUB_REPOSITORY="apache/airflow"
 ENV CONSTRAINTS_GITHUB_REPOSITORY=${CONSTRAINTS_GITHUB_REPOSITORY}
 
-ARG AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0"
-ARG AIRFLOW_CONSTRAINTS="constraints"
-ARG AIRFLOW_CONSTRAINTS="constraints"
-ARG AIRFLOW_CONSTRAINTS_LOCATION="https://raw.githubusercontent.com/${CONSTRAINTS_GITHUB_REPOSITORY}/${AIRFLOW_CONSTRAINTS_REFERENCE}/${AIRFLOW_CONSTRAINTS}-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+ARG AIRFLOW_CONSTRAINTS="constraints-2.0"
+ENV AIRFLOW_CONSTRAINTS=${AIRFLOW_CONSTRAINTS}
+ARG AIRFLOW_CONSTRAINTS_REFERENCE=""
+ENV AIRFLOW_CONSTRAINTS_REFERENCE=${AIRFLOW_CONSTRAINTS_REFERENCE}
+ARG AIRFLOW_CONSTRAINTS_LOCATION=""
 ENV AIRFLOW_CONSTRAINTS_LOCATION=${AIRFLOW_CONSTRAINTS_LOCATION}
 
+ARG DEFAULT_CONSTRAINTS_BRANCH="constraints-master"
+ENV DEFAULT_CONSTRAINTS_BRANCH=${DEFAULT_CONSTRAINTS_BRANCH}
+
 ENV PATH=${PATH}:/root/.local/bin
 RUN mkdir -p /root/.local/bin
 
@@ -204,6 +204,26 @@ ENV AIRFLOW_PRE_CACHED_PIP_PACKAGES=${AIRFLOW_PRE_CACHED_PIP_PACKAGES}
 ARG INSTALL_PROVIDERS_FROM_SOURCES="false"
 ENV INSTALL_PROVIDERS_FROM_SOURCES=${INSTALL_PROVIDERS_FROM_SOURCES}
 
+# This is airflow version that is put in the label of the image build
+ARG AIRFLOW_VERSION
+ENV AIRFLOW_VERSION=${AIRFLOW_VERSION}
+
+# Determines the way airflow is installed. By default we install airflow from PyPI `apache-airflow` package
+# But it also can be `.` from local installation or GitHub URL pointing to specific branch or tag
+# Of Airflow. Note That for local source installation you need to have local sources of
+# Airflow checked out together with the Dockerfile and AIRFLOW_SOURCES_FROM and AIRFLOW_SOURCES_TO
+# set to "." and "/opt/airflow" respectively.
+ARG AIRFLOW_INSTALLATION_METHOD="apache-airflow"
+ENV AIRFLOW_INSTALLATION_METHOD=${AIRFLOW_INSTALLATION_METHOD}
+
+# By default latest released version of airflow is installed (when empty) but this value can be overridden
+# and we can install version according to specification (For example ==2.0.2 or <3.0.0).
+ARG AIRFLOW_VERSION_SPECIFICATION=""
+ENV AIRFLOW_VERSION_SPECIFICATION=${AIRFLOW_VERSION_SPECIFICATION}
+
+# Only copy common.sh to not invalidate cache on other script changes
+COPY scripts/docker/common.sh /scripts/docker/common.sh
+
 # Only copy install_airflow_from_branch_tip.sh to not invalidate cache on other script changes
 COPY scripts/docker/install_airflow_from_branch_tip.sh /scripts/docker/install_airflow_from_branch_tip.sh
 
@@ -236,27 +256,10 @@ COPY ${AIRFLOW_SOURCES_FROM} ${AIRFLOW_SOURCES_TO}
 ARG CASS_DRIVER_BUILD_CONCURRENCY
 ENV CASS_DRIVER_BUILD_CONCURRENCY=${CASS_DRIVER_BUILD_CONCURRENCY}
 
-# This is airflow version that is put in the label of the image build
-ARG AIRFLOW_VERSION
-ENV AIRFLOW_VERSION=${AIRFLOW_VERSION}
-
 # Add extra python dependencies
 ARG ADDITIONAL_PYTHON_DEPS=""
 ENV ADDITIONAL_PYTHON_DEPS=${ADDITIONAL_PYTHON_DEPS}
 
-# Determines the way airflow is installed. By default we install airflow from PyPI `apache-airflow` package
-# But it also can be `.` from local installation or GitHub URL pointing to specific branch or tag
-# Of Airflow. Note That for local source installation you need to have local sources of
-# Airflow checked out together with the Dockerfile and AIRFLOW_SOURCES_FROM and AIRFLOW_SOURCES_TO
-# set to "." and "/opt/airflow" respectively.
-ARG AIRFLOW_INSTALLATION_METHOD="apache-airflow"
-ENV AIRFLOW_INSTALLATION_METHOD=${AIRFLOW_INSTALLATION_METHOD}
-
-# By default latest released version of airflow is installed (when empty) but this value can be overridden
-# and we can install version according to specification (For example ==2.0.2 or <3.0.0).
-ARG AIRFLOW_VERSION_SPECIFICATION=""
-ENV AIRFLOW_VERSION_SPECIFICATION=${AIRFLOW_VERSION_SPECIFICATION}
-
 # We can set this value to true in case we want to install .whl .tar.gz packages placed in the
 # docker-context-files folder. This can be done for both - additional packages you want to install
 # and for airflow as well (you have to set INSTALL_FROM_PYPI to false in this case)
@@ -274,7 +277,7 @@ ENV INSTALL_FROM_PYPI=${INSTALL_FROM_PYPI}
 # * urllib3 - required to keep boto3 happy
 # * pytz<2021.0: required by snowflake provider
 # * pyjwt<2.0.0: flask-jwt-extended requires it
-ARG EAGER_UPGRADE_ADDITIONAL_REQUIREMENTS="chardet<4 urllib3<1.26 pytz<2021.0 pyjwt<2.0.0"
+ARG EAGER_UPGRADE_ADDITIONAL_REQUIREMENTS="chardet<4 urllib3<1.26 pyjwt<2.0.0"
 
 WORKDIR /opt/airflow
 
@@ -284,11 +287,10 @@ ARG CONTINUE_ON_PIP_CHECK_FAILURE="false"
 COPY scripts/docker/install*.sh /scripts/docker/
 
 # hadolint ignore=SC2086, SC2010
-RUN if [[ ${INSTALL_FROM_PYPI} == "true" ]]; then \
-        bash /scripts/docker/install_airflow.sh; \
-    fi; \
-    if [[ ${INSTALL_FROM_DOCKER_CONTEXT_FILES} == "true" ]]; then \
+RUN if [[ ${INSTALL_FROM_DOCKER_CONTEXT_FILES} == "true" ]]; then \
         bash /scripts/docker/install_from_docker_context_files.sh; \
+    elif [[ ${INSTALL_FROM_PYPI} == "true" ]]; then \
+        bash /scripts/docker/install_airflow.sh; \
     fi; \
     if [[ -n "${ADDITIONAL_PYTHON_DEPS}" ]]; then \
         bash /scripts/docker/install_additional_dependencies.sh; \
diff --git a/Dockerfile.ci b/Dockerfile.ci
index 9629621..49c31d6 100644
--- a/Dockerfile.ci
+++ b/Dockerfile.ci
@@ -26,9 +26,6 @@ ENV PYTHON_BASE_IMAGE=${PYTHON_BASE_IMAGE}
 ARG AIRFLOW_VERSION="2.0.0.dev0"
 ENV AIRFLOW_VERSION=$AIRFLOW_VERSION
 
-ARG PYTHON_MAJOR_MINOR_VERSION="3.6"
-ENV PYTHON_MAJOR_MINOR_VERSION=${PYTHON_MAJOR_MINOR_VERSION}
-
 # Print versions
 RUN echo "Base image: ${PYTHON_BASE_IMAGE}"
 RUN echo "Airflow version: ${AIRFLOW_VERSION}"
@@ -241,11 +238,16 @@ RUN echo "Installing with extras: ${AIRFLOW_EXTRAS}."
 ARG CONSTRAINTS_GITHUB_REPOSITORY="apache/airflow"
 ENV CONSTRAINTS_GITHUB_REPOSITORY=${CONSTRAINTS_GITHUB_REPOSITORY}
 
-ARG AIRFLOW_CONSTRAINTS_REFERENCE="constraints-${AIRFLOW_BRANCH}"
 ARG AIRFLOW_CONSTRAINTS="constraints"
-ARG AIRFLOW_CONSTRAINTS_LOCATION="https://raw.githubusercontent.com/${CONSTRAINTS_GITHUB_REPOSITORY}/${AIRFLOW_CONSTRAINTS_REFERENCE}/${AIRFLOW_CONSTRAINTS}-${PYTHON_MAJOR_MINOR_VERSION}.txt"
+ENV AIRFLOW_CONSTRAINTS=${AIRFLOW_CONSTRAINTS}
+ARG AIRFLOW_CONSTRAINTS_REFERENCE=""
+ENV AIRFLOW_CONSTRAINTS_REFERENCE=${AIRFLOW_CONSTRAINTS_REFERENCE}
+ARG AIRFLOW_CONSTRAINTS_LOCATION=""
 ENV AIRFLOW_CONSTRAINTS_LOCATION=${AIRFLOW_CONSTRAINTS_LOCATION}
 
+ARG DEFAULT_CONSTRAINTS_BRANCH="constraints-master"
+ENV DEFAULT_CONSTRAINTS_BRANCH=${DEFAULT_CONSTRAINTS_BRANCH}
+
 # By changing the CI build epoch we can force reinstalling Airflow and pip all dependencies
 # It can also be overwritten manually by setting the AIRFLOW_CI_BUILD_EPOCH environment variable.
 ARG AIRFLOW_CI_BUILD_EPOCH="3"
@@ -292,6 +294,9 @@ ENV PIP_PROGRESS_BAR=${PIP_PROGRESS_BAR}
 
 RUN pip install --no-cache-dir --upgrade "pip==${AIRFLOW_PIP_VERSION}"
 
+# Only copy common.sh to not invalidate further layers
+COPY scripts/docker/common.sh /scripts/docker/common.sh
+
 # Only copy install_airflow_from_branch_tip.sh to not invalidate cache on other script changes
 COPY scripts/docker/install_airflow_from_branch_tip.sh /scripts/docker/install_airflow_from_branch_tip.sh
 
diff --git a/IMAGES.rst b/IMAGES.rst
index 0304bd7..3e00f41 100644
--- a/IMAGES.rst
+++ b/IMAGES.rst
@@ -454,7 +454,6 @@ additional apt dev and runtime dependencies.
     --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
     --build-arg AIRFLOW_VERSION="2.0.0" \
     --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty" \
     --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc"
@@ -489,7 +488,6 @@ based on example in `this comment <https://github.com/apache/airflow/issues/8605
     --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
     --build-arg AIRFLOW_VERSION="2.0.0" \
     --build-arg AIRFLOW_VERSION_SPECIFICATION="==2.0.0" \
-    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
     --build-arg AIRFLOW_SOURCES_FROM="empty" \
     --build-arg AIRFLOW_SOURCES_TO="/empty" \
     --build-arg ADDITIONAL_AIRFLOW_EXTRAS="slack" \
@@ -567,7 +565,7 @@ The following build arguments (``--build-arg`` in docker build command) can be u
 |                                          |                                          | set to true. Default location from       |
 |                                          |                                          | GitHub is used in this case.             |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_REFERENCE``        | ``constraints-master``                   | reference (branch or tag) from GitHub    |
+| ``AIRFLOW_CONSTRAINTS_REFERENCE``        |                                          | reference (branch or tag) from GitHub    |
 |                                          |                                          | repository from which constraints are    |
 |                                          |                                          | used. By default it is set to            |
 |                                          |                                          | ``constraints-master`` but can be        |
@@ -575,6 +573,7 @@ The following build arguments (``--build-arg`` in docker build command) can be u
 |                                          |                                          | ``constraints-1-10`` for 1.10.* versions |
 |                                          |                                          | or it could point to specific version    |
 |                                          |                                          | for example ``constraints-2.0.0``        |
+|                                          |                                          | If empty, it is auto-detected            |
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | ``INSTALL_PROVIDERS_FROM_SOURCES``       | ``true``                                 | If set to false and image is built from  |
 |                                          |                                          | sources, all provider packages are not   |
diff --git a/breeze b/breeze
index 94a8b86..4df4937 100755
--- a/breeze
+++ b/breeze
@@ -881,11 +881,6 @@ function breeze::parse_arguments() {
             INSTALL_AIRFLOW_VERSION="${2}"
             # Reference is mutually exclusive with version
             INSTALL_AIRFLOW_REFERENCE=""
-            # Skip mounting local sources when airflow is installed from remote
-            if [[ ${INSTALL_AIRFLOW_VERSION} =~ ^[0-9\.]*$ ]]; then
-                echo "Install providers from PyPI"
-                INSTALL_PROVIDERS_FROM_SOURCES="false"
-            fi
             echo "Installs version of Airflow: ${INSTALL_AIRFLOW_VERSION}"
             echo
             shift 2
diff --git a/docs/docker-stack/build-arg-ref.rst b/docs/docker-stack/build-arg-ref.rst
index 57d4da5..2ec04c8 100644
--- a/docs/docker-stack/build-arg-ref.rst
+++ b/docs/docker-stack/build-arg-ref.rst
@@ -18,99 +18,77 @@
 Image build arguments reference
 -------------------------------
 
-The following build arguments (``--build-arg`` in docker build command) can be used for production images:
+The following build arguments (``--build-arg`` in docker build command) can be used for production images.
+Those arguments are used when you want to customize the image. You can see some examples of their use in
+:ref:`Building from PyPI packages<image-build-pypi>`.
+
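+For example, a minimal customized build using just a few of the basic arguments could look like the sketch
+below (the Python base image, the extras and the image tag are illustrative assumptions, not recommendations):
+
+.. code-block:: bash
+
+  docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.8-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc" \
+    --tag my-airflow-image
+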
+Basic arguments
+...............
+
+Those are the most common arguments that you use when you want to build a custom image.
 
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | Build argument                           | Default value                            | Description                              |
 +==========================================+==========================================+==========================================+
 | ``PYTHON_BASE_IMAGE``                    | ``python:3.6-slim-buster``               | Base python image.                       |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``PYTHON_MAJOR_MINOR_VERSION``           | ``3.6``                                  | major/minor version of Python (should    |
-|                                          |                                          | match base image).                       |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_VERSION``                      | ``2.0.1.dev0``                           | version of Airflow.                      |
+| ``AIRFLOW_VERSION``                      | ``2.0.1``                                | version of Airflow.                      |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_REPO``                         | ``apache/airflow``                       | the repository from which PIP            |
-|                                          |                                          | dependencies are pre-installed.          |
+| ``AIRFLOW_EXTRAS``                       | (see Dockerfile)                         | Default extras with which airflow is     |
+|                                          |                                          | installed.                               |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_BRANCH``                       | ``master``                               | the branch from which PIP dependencies   |
-|                                          |                                          | are pre-installed initially.             |
+| ``ADDITIONAL_AIRFLOW_EXTRAS``            |                                          | Optional additional extras with which    |
+|                                          |                                          | airflow is installed.                    |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_LOCATION``         |                                          | If not empty, it will override the       |
-|                                          |                                          | source of the constraints with the       |
-|                                          |                                          | specified URL or file. Note that the     |
-|                                          |                                          | file has to be in docker context so      |
-|                                          |                                          | it's best to place such file in          |
-|                                          |                                          | one of the folders included in           |
-|                                          |                                          | ``.dockerignore`` file.                  |
+| ``AIRFLOW_HOME``                         | ``/opt/airflow``                         | Airflow’s HOME (that’s where logs and    |
+|                                          |                                          | SQLite databases are stored).            |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_REFERENCE``        | ``constraints-master``                   | Reference (branch or tag) from GitHub    |
-|                                          |                                          | where constraints file is taken from     |
-|                                          |                                          | It can be ``constraints-master`` but     |
-|                                          |                                          | also can be ``constraints-1-10`` for     |
-|                                          |                                          | 1.10.* installation. In case of building |
-|                                          |                                          | specific version you want to point it    |
-|                                          |                                          | to specific tag, for example             |
-|                                          |                                          | ``constraints-1.10.15``.                 |
+| ``AIRFLOW_USER_HOME_DIR``                | ``/home/airflow``                        | Home directory of the Airflow user.      |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_PROVIDERS_FROM_SOURCES``       | ``false``                                | If set to ``true`` and image is built    |
-|                                          |                                          | from sources, all provider packages are  |
-|                                          |                                          | installed from sources rather than from  |
-|                                          |                                          | packages. It has no effect when          |
-|                                          |                                          | installing from PyPI or GitHub repo.     |
+| ``AIRFLOW_PIP_VERSION``                  | ``20.2.4``                               | PIP version used.                        |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_EXTRAS``                       | (see Dockerfile)                         | Default extras with which airflow is     |
-|                                          |                                          | installed.                               |
+| ``PIP_PROGRESS_BAR``                     | ``on``                                   | Progress bar for PIP installation        |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_FROM_PYPI``                    | ``true``                                 | If set to true, Airflow is installed     |
-|                                          |                                          | from PyPI. if you want to install        |
-|                                          |                                          | Airflow from self-build package          |
-|                                          |                                          | you can set it to false, put package in  |
-|                                          |                                          | ``docker-context-files`` and set         |
-|                                          |                                          | ``INSTALL_FROM_DOCKER_CONTEXT_FILES`` to |
-|                                          |                                          | ``true``. For this you have to also keep |
-|                                          |                                          | ``AIRFLOW_PRE_CACHED_PIP_PACKAGES`` flag |
-|                                          |                                          | set to ``false``.                        |
+| ``AIRFLOW_UID``                          | ``50000``                                | Airflow user UID.                        |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_PRE_CACHED_PIP_PACKAGES``      | ``false``                                | Allows to pre-cache airflow PIP packages |
-|                                          |                                          | from the GitHub of Apache Airflow        |
-|                                          |                                          | This allows to optimize iterations for   |
-|                                          |                                          | Image builds and speeds up CI builds.    |
+| ``AIRFLOW_GID``                          | ``50000``                                | Airflow group GID. Note that most files  |
+|                                          |                                          | created on behalf of airflow user belong |
+|                                          |                                          | to the ``root`` group (0) to keep        |
+|                                          |                                          | OpenShift Guidelines compatibility.      |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``INSTALL_FROM_DOCKER_CONTEXT_FILES``    | ``false``                                | If set to true, Airflow, providers and   |
-|                                          |                                          | all dependencies are installed from      |
-|                                          |                                          | from locally built/downloaded            |
-|                                          |                                          | .whl and .tar.gz files placed in the     |
-|                                          |                                          | ``docker-context-files``. In certain     |
-|                                          |                                          | corporate environments, this is required |
-|                                          |                                          | to install airflow from such pre-vetted  |
-|                                          |                                          | packages rather than from PyPI. For this |
-|                                          |                                          | to work, also set ``INSTALL_FROM_PYPI``. |
-|                                          |                                          | Note that packages starting with         |
-|                                          |                                          | ``apache?airflow`` glob are treated      |
-|                                          |                                          | differently than other packages. All     |
-|                                          |                                          | ``apache?airflow`` packages are          |
-|                                          |                                          | installed with dependencies limited by   |
-|                                          |                                          | airflow constraints. All other packages  |
-|                                          |                                          | are installed without dependencies       |
-|                                          |                                          | 'as-is'. If you wish to install airflow  |
-|                                          |                                          | via 'pip download' with all dependencies |
-|                                          |                                          | downloaded, you have to rename the       |
-|                                          |                                          | apache airflow and provider packages to  |
-|                                          |                                          | not start with ``apache?airflow`` glob.  |
+| ``AIRFLOW_CONSTRAINTS_REFERENCE``        |                                          | Reference (branch or tag) from GitHub    |
+|                                          |                                          | where constraints file is taken from     |
+|                                          |                                          | It can be ``constraints-master`` but     |
+|                                          |                                          | can be ``constraints-1-10`` for 1.10.*   |
+|                                          |                                          | versions or ``constraints-2-0`` for      |
+|                                          |                                          | 2.0.* installation. In case of building  |
+|                                          |                                          | specific version you want to point it    |
+|                                          |                                          | to specific tag, for example             |
+|                                          |                                          | ``constraints-2.0.1``.                   |
+|                                          |                                          | Auto-detected if empty.                  |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``UPGRADE_TO_NEWER_DEPENDENCIES``        | ``false``                                | If set to true, the dependencies are     |
-|                                          |                                          | upgraded to newer versions matching      |
-|                                          |                                          | setup.py before installation.            |
+
+Image optimization options
+..........................
+
+The main advantage of the customization method of building the Airflow image is that it allows building a
+highly optimized image, because the final (RUNTIME) image does not have to contain all the dependencies that
+are needed to build and install the other dependencies (DEV). Those arguments allow controlling what is
+installed in the DEV image and what is installed in the RUNTIME one, thus producing much more optimized
+images. See :ref:`Building optimized images<image-build-optimized>` for examples of using those arguments.
+
 +------------------------------------------+------------------------------------------+------------------------------------------+
+| Build argument                           | Default value                            | Description                              |
++==========================================+==========================================+==========================================+
 | ``CONTINUE_ON_PIP_CHECK_FAILURE``        | ``false``                                | By default the image build fails if pip  |
 |                                          |                                          | check fails for it. This is good for     |
 |                                          |                                          | interactive building but on CI the       |
 |                                          |                                          | image should be built regardless - we    |
 |                                          |                                          | have a separate step to verify image.    |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``ADDITIONAL_AIRFLOW_EXTRAS``            |                                          | Optional additional extras with which    |
-|                                          |                                          | airflow is installed.                    |
+| ``UPGRADE_TO_NEWER_DEPENDENCIES``        | ``false``                                | If set to true, the dependencies are     |
+|                                          |                                          | upgraded to newer versions matching      |
+|                                          |                                          | setup.py before installation.            |
 +------------------------------------------+------------------------------------------+------------------------------------------+
 | ``ADDITIONAL_PYTHON_DEPS``               |                                          | Optional python packages to extend       |
 |                                          |                                          | the image with some extra dependencies.  |
@@ -149,18 +127,6 @@ The following build arguments (``--build-arg`` in docker build command) can be u
 | ``ADDITIONAL_RUNTIME_APT_ENV``           |                                          | Additional env variables defined         |
 |                                          |                                          | when installing runtime deps.            |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_HOME``                         | ``/opt/airflow``                         | Airflow’s HOME (that’s where logs and    |
-|                                          |                                          | SQLite databases are stored).            |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_UID``                          | ``50000``                                | Airflow user UID.                        |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_GID``                          | ``50000``                                | Airflow group GID. Note that most files  |
-|                                          |                                          | created on behalf of airflow user belong |
-|                                          |                                          | to the ``root`` group (0) to keep        |
-|                                          |                                          | OpenShift Guidelines compatibility.      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_USER_HOME_DIR``                | ``/home/airflow``                        | Home directory of the Airflow user.      |
-+------------------------------------------+------------------------------------------+------------------------------------------+
 | ``CASS_DRIVER_BUILD_CONCURRENCY``        | ``8``                                    | Number of processors to use for          |
 |                                          |                                          | cassandra PIP install (speeds up         |
 |                                          |                                          | installing in case cassandra extra is    |
@@ -170,43 +136,106 @@ The following build arguments (``--build-arg`` in docker build command) can be u
 |                                          |                                          | The mysql extra is removed from extras   |
 |                                          |                                          | if the client is not installed.          |
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``AIRFLOW_PIP_VERSION``                  | ``20.2.4``                               | PIP version used.                        |
+
+Installing Airflow using different methods
+..........................................
+
+Those parameters are useful only if you want to install Airflow using an installation method other than the
+default (installing from PyPI packages).
+
+This is usually only useful if you have your own fork of Airflow and want to build the images from
+those sources - either locally or directly from GitHub. This way you do not need to release your
+Airflow and Providers via PyPI - they can be installed directly from sources or from the GitHub repository.
+Another installation option is to build Airflow from previously prepared binary Python packages, which might
+be useful if you need to build Airflow in environments that require high levels of security.
+
+You can see some examples of those in:
+  * :ref:`Building from GitHub<image-build-github>`,
+  * :ref:`Using custom installation sources<image-build-custom>`,
+  * :ref:`Build images in security restricted environments<image-build-secure-environments>`
+
 +------------------------------------------+------------------------------------------+------------------------------------------+
-| ``PIP_PROGRESS_BAR``                     | ``on``                                   | Progress bar for PIP installation        |
+| Build argument                           | Default value                            | Description                              |
++==========================================+==========================================+==========================================+
+| ``AIRFLOW_INSTALLATION_METHOD``          | ``apache-airflow``                       | Installation method of Apache Airflow.   |
+|                                          |                                          | ``apache-airflow`` for installation from |
+|                                          |                                          | PyPI. It can be GitHub repository URL    |
+|                                          |                                          | including branch or tag to install from  |
+|                                          |                                          | that repository or "." to install from   |
+|                                          |                                          | local sources. Installing from sources   |
+|                                          |                                          | requires appropriate values of the       |
+|                                          |                                          | ``AIRFLOW_SOURCES_FROM`` and             |
+|                                          |                                          | ``AIRFLOW_SOURCES_TO`` variables (see    |
+|                                          |                                          | below)                                   |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_SOURCES_FROM``                 | ``empty``                                | Sources of Airflow. Set it to "." when   |
+|                                          |                                          | you install Airflow from local sources   |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_SOURCES_TO``                   | ``/empty``                               | Target for Airflow sources. Set to       |
+|                                          |                                          | "/opt/airflow" when you install Airflow  |
+|                                          |                                          | from local sources.                      |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_VERSION_SPECIFICATION``        |                                          | Optional - might be used to limit the    |
+|                                          |                                          | Airflow version installed, for example   |
+|                                          |                                          | ``<2.0.2`` for automated builds.         |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``INSTALL_PROVIDERS_FROM_SOURCES``       | ``false``                                | If set to ``true`` and image is built    |
+|                                          |                                          | from sources, all provider packages are  |
+|                                          |                                          | installed from sources rather than from  |
+|                                          |                                          | packages. It has no effect when          |
+|                                          |                                          | installing from PyPI or GitHub repo.     |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_CONSTRAINTS_LOCATION``         |                                          | If not empty, it will override the       |
+|                                          |                                          | source of the constraints with the       |
+|                                          |                                          | specified URL or file. Note that the     |
+|                                          |                                          | file has to be in the Docker context, so |
+|                                          |                                          | it's best to place such a file in one of |
+|                                          |                                          | the folders included in the              |
+|                                          |                                          | ``.dockerignore`` file.                  |
 +------------------------------------------+------------------------------------------+------------------------------------------+
+| ``INSTALL_FROM_DOCKER_CONTEXT_FILES``    | ``false``                                | If set to true, Airflow, providers and   |
+|                                          |                                          | all dependencies are installed           |
+|                                          |                                          | from locally built/downloaded            |
+|                                          |                                          | .whl and .tar.gz files placed in the     |
+|                                          |                                          | ``docker-context-files``. In certain     |
+|                                          |                                          | corporate environments, this is required |
+|                                          |                                          | to install airflow from such pre-vetted  |
+|                                          |                                          | packages rather than from PyPI. For this |
+|                                          |                                          | to work, also set ``INSTALL_FROM_PYPI``. |
+|                                          |                                          | Note that packages starting with         |
+|                                          |                                          | ``apache?airflow`` glob are treated      |
+|                                          |                                          | differently than other packages. All     |
+|                                          |                                          | ``apache?airflow`` packages are          |
+|                                          |                                          | installed with dependencies limited by   |
+|                                          |                                          | airflow constraints. All other packages  |
+|                                          |                                          | are installed without dependencies       |
+|                                          |                                          | 'as-is'. If you wish to install airflow  |
+|                                          |                                          | via 'pip download' with all dependencies |
+|                                          |                                          | downloaded, you have to rename the       |
+|                                          |                                          | apache airflow and provider packages to  |
+|                                          |                                          | not start with ``apache?airflow`` glob.  |
++------------------------------------------+------------------------------------------+------------------------------------------+
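+
+For instance, combining the first three arguments above, an image installed from a local checkout of the
+sources could be built roughly like this (an illustrative sketch only, run from the root of the Airflow
+sources):
+
+.. code-block:: bash
+
+    docker build . \
+        --build-arg AIRFLOW_INSTALLATION_METHOD="." \
+        --build-arg AIRFLOW_SOURCES_FROM="." \
+        --build-arg AIRFLOW_SOURCES_TO="/opt/airflow"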
+
+Pre-caching PIP dependencies
+............................
+
+When the image is built using PIP, pre-caching of PIP dependencies is used by default in order to speed up
+incremental builds during development. When pre-cached PIP dependencies are used and ``setup.py`` or ``setup.cfg``
+changes, most of the PIP dependencies are already pre-installed, thus resulting in a much faster image rebuild.
+This is purely an optimization of the time needed to build the images and it should be disabled if you want to
+install Airflow from docker context files.
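+
+For example, to make sure pre-caching is disabled for such a build, you might pass the flag explicitly
+(an illustrative sketch; the version and tag below are placeholders):
+
+.. code-block:: bash
+
+    docker build . \
+        --build-arg AIRFLOW_VERSION="2.0.1" \
+        --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="false" \
+        --tag my-airflow-image:0.0.1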
 
-There are build arguments that determine the installation mechanism of Apache Airflow for the
-production image. There are three types of build:
-
-* From local sources (by default for example when you use ``docker build .``)
-* You can build the image from released PyPI airflow package (used to build the official Docker image)
-* You can build the image from any version in GitHub repository(this is used mostly for system testing).
-
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| Build argument                    | Default                | What to specify                                                                   |
-+===================================+========================+===================================================================================+
-| ``AIRFLOW_INSTALLATION_METHOD``   | ``apache-airflow``     | Should point to the installation method of Apache Airflow. It can be              |
-|                                   |                        | ``apache-airflow`` for installation from packages and URL to installation from    |
-|                                   |                        | GitHub repository tag or branch or "." to install from sources.                   |
-|                                   |                        | Note that installing from local sources requires appropriate values of the        |
-|                                   |                        | ``AIRFLOW_SOURCES_FROM`` and ``AIRFLOW_SOURCES_TO`` variables as described below. |
-|                                   |                        | Only used when ``INSTALL_FROM_PYPI`` is set to ``true``.                          |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_VERSION_SPECIFICATION`` |                        | Optional - might be used for package installation of different Airflow version    |
-|                                   |                        | for example"==2.0.1". For consistency, you should also set``AIRFLOW_VERSION``     |
-|                                   |                        | to the same value AIRFLOW_VERSION is resolved as label in the image created.      |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_CONSTRAINTS_REFERENCE`` | ``constraints-master`` | Reference (branch or tag) from GitHub where constraints file is taken from.       |
-|                                   |                        | It can be ``constraints-master`` but also can be``constraints-1-10`` for          |
-|                                   |                        | 1.10.*  installations. In case of building specific version                       |
-|                                   |                        | you want to point it to specific tag, for example ``constraints-2.0.1``           |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_WWW``                   | ``www``                | In case of Airflow 2.0 it should be "www", in case of Airflow 1.10                |
-|                                   |                        | series it should be "www_rbac".                                                   |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_SOURCES_FROM``          | ``empty``              | Sources of Airflow. Set it to "." when you install airflow from                   |
-|                                   |                        | local sources.                                                                    |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
-| ``AIRFLOW_SOURCES_TO``            | ``/empty``             | Target for Airflow sources. Set to "/opt/airflow" when                            |
-|                                   |                        | you want to install airflow from local sources.                                   |
-+-----------------------------------+------------------------+-----------------------------------------------------------------------------------+
++------------------------------------------+------------------------------------------+------------------------------------------+
+| Build argument                           | Default value                            | Description                              |
++==========================================+==========================================+==========================================+
+| ``AIRFLOW_BRANCH``                       | ``master``                               | the branch from which PIP dependencies   |
+|                                          |                                          | are pre-installed initially.             |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_REPO``                         | ``apache/airflow``                       | the repository from which PIP            |
+|                                          |                                          | dependencies are pre-installed.          |
++------------------------------------------+------------------------------------------+------------------------------------------+
+| ``AIRFLOW_PRE_CACHED_PIP_PACKAGES``      | ``false``                                | Allows pre-caching of Airflow PIP        |
+|                                          |                                          | packages from the Apache Airflow GitHub  |
+|                                          |                                          | repository. This optimizes iterations    |
+|                                          |                                          | for image builds and speeds up CI.       |
++------------------------------------------+------------------------------------------+------------------------------------------+
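+
+Combining the arguments above, pre-caching dependencies from a fork and branch of your choice could look
+roughly like this (an illustrative sketch; the repository and branch names are placeholders):
+
+.. code-block:: bash
+
+    docker build . \
+        --build-arg AIRFLOW_REPO="myorg/airflow" \
+        --build-arg AIRFLOW_BRANCH="v2-0-stable" \
+        --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="true"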
diff --git a/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh b/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh
new file mode 100755
index 0000000..7164470
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh
@@ -0,0 +1,33 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.6-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg ADDITIONAL_PYTHON_DEPS="mpi4py" \
+    --build-arg ADDITIONAL_DEV_APT_DEPS="libopenmpi-dev" \
+    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="openmpi-common" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/custom-sources.sh b/docs/docker-stack/docker-examples/customizing/custom-sources.sh
new file mode 100755
index 0000000..242fc2e
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/custom-sources.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . -f Dockerfile \
+    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="slack,odbc" \
+    --build-arg ADDITIONAL_PYTHON_DEPS=" \
+        azure-storage-blob \
+        oauth2client \
+        beautifulsoup4 \
+        dateparser \
+        rocketchat_API \
+        typeform" \
+    --build-arg ADDITIONAL_DEV_APT_COMMAND="curl https://packages.microsoft.com/keys/microsoft.asc | \
+    apt-key add --no-tty - && \
+    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
+    --build-arg ADDITIONAL_DEV_APT_ENV="ACCEPT_EULA=Y" \
+    --build-arg ADDITIONAL_DEV_APT_DEPS="msodbcsql17 unixodbc-dev g++" \
+    --build-arg ADDITIONAL_RUNTIME_APT_COMMAND="curl https://packages.microsoft.com/keys/microsoft.asc | \
+    apt-key add --no-tty - && \
+    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
+    --build-arg ADDITIONAL_RUNTIME_APT_ENV="ACCEPT_EULA=Y" \
+    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="msodbcsql17 unixodbc git procps vim" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/github-different-repository.sh b/docs/docker-stack/docker-examples/customizing/github-different-repository.sh
new file mode 100755
index 0000000..b980b5b
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/github-different-repository.sh
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.8-slim-buster" \
+    --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/potiuk/airflow/archive/master.tar.gz#egg=apache-airflow" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-master" \
+    --build-arg CONSTRAINTS_GITHUB_REPOSITORY="potiuk/airflow" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/github-master.sh b/docs/docker-stack/docker-examples/customizing/github-master.sh
new file mode 100755
index 0000000..4237e91
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/github-master.sh
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
+    --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/master.tar.gz#egg=apache-airflow" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-master" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/github-v2-0-test.sh b/docs/docker-stack/docker-examples/customizing/github-v2-0-test.sh
new file mode 100755
index 0000000..b893618
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/github-v2-0-test.sh
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.8-slim-buster" \
+    --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/v2-0-test.tar.gz#egg=apache-airflow" \
+    --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh b/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh
new file mode 100755
index 0000000..43a8092
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh
@@ -0,0 +1,34 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.6-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="jdbc" \
+    --build-arg ADDITIONAL_PYTHON_DEPS="pandas" \
+    --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++" \
+    --build-arg ADDITIONAL_RUNTIME_APT_DEPS="default-jre-headless" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh b/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh
new file mode 100755
index 0000000..7d150bc
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh
@@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.8-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="mssql,hdfs" \
+    --build-arg ADDITIONAL_PYTHON_DEPS="oauth2client" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh b/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh
new file mode 100755
index 0000000..98e06a1
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/customizing/stable-airflow.sh b/docs/docker-stack/docker-examples/customizing/stable-airflow.sh
new file mode 100755
index 0000000..d3471ac
--- /dev/null
+++ b/docs/docker-stack/docker-examples/customizing/stable-airflow.sh
@@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START build]
+docker build . \
+    --tag "$(basename "$0")"
+# [END build]
+docker rmi --force "$(basename "$0")"
diff --git a/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
new file mode 100644
index 0000000..8fb128e
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This is an example Dockerfile. It is not intended for PRODUCTION use
+# [START Dockerfile]
+FROM apache/airflow:2.0.1
+USER root
+RUN apt-get update \
+  && apt-get install -y --no-install-recommends \
+         vim \
+  && apt-get autoremove -yqq --purge \
+  && apt-get clean \
+  && rm -rf /var/lib/apt/lists/*
+USER airflow
+# [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
new file mode 100644
index 0000000..f0dc0d1
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
@@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This is an example Dockerfile. It is not intended for PRODUCTION use
+# [START Dockerfile]
+FROM apache/airflow:2.0.1
+USER root
+RUN apt-get update \
+  && apt-get install -y --no-install-recommends \
+         build-essential libopenmpi-dev \
+  && apt-get autoremove -yqq --purge \
+  && apt-get clean \
+  && rm -rf /var/lib/apt/lists/*
+USER airflow
+RUN pip install --no-cache-dir mpi4py
+# [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
new file mode 100644
index 0000000..401e493
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This is an example Dockerfile. It is not intended for PRODUCTION use
+# [START Dockerfile]
+FROM apache/airflow:2.0.1
+RUN pip install --no-cache-dir lxml
+# [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
new file mode 100644
index 0000000..9213729
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This is an example Dockerfile. It is not intended for PRODUCTION use
+# [START Dockerfile]
+FROM apache/airflow:2.0.1
+
+COPY --chown=airflow:root test_dag.py /opt/airflow/dags
+
+# [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/embedding-dags/test_dag.py b/docs/docker-stack/docker-examples/extending/embedding-dags/test_dag.py
new file mode 100644
index 0000000..467c8c3
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/embedding-dags/test_dag.py
@@ -0,0 +1,39 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# [START dag]
+"""This dag only runs some simple tasks to test Airflow's task execution."""
+from datetime import datetime, timedelta
+
+from airflow.models.dag import DAG
+from airflow.operators.dummy import DummyOperator
+from airflow.utils.dates import days_ago
+
+now = datetime.now()
+now_to_the_hour = (now - timedelta(hours=3)).replace(minute=0, second=0, microsecond=0)
+START_DATE = now_to_the_hour
+DAG_NAME = 'test_dag_v1'
+
+default_args = {'owner': 'airflow', 'depends_on_past': True, 'start_date': days_ago(2)}
+dag = DAG(DAG_NAME, schedule_interval='*/10 * * * *', default_args=default_args)
+
+run_this_1 = DummyOperator(task_id='run_this_1', dag=dag)
+run_this_2 = DummyOperator(task_id='run_this_2', dag=dag)
+run_this_2.set_upstream(run_this_1)
+run_this_3 = DummyOperator(task_id='run_this_3', dag=dag)
+run_this_3.set_upstream(run_this_2)
+# [END dag]
diff --git a/docs/docker-stack/docker-examples/restricted/restricted_environments.sh b/docs/docker-stack/docker-examples/restricted/restricted_environments.sh
new file mode 100755
index 0000000..e7a3699
--- /dev/null
+++ b/docs/docker-stack/docker-examples/restricted/restricted_environments.sh
@@ -0,0 +1,44 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# This is an example docker build script. It is not intended for PRODUCTION use
+set -euo pipefail
+AIRFLOW_SOURCES="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../../../" && pwd)"
+cd "${AIRFLOW_SOURCES}"
+
+# [START download]
+rm docker-context-files/*.whl docker-context-files/*.tar.gz docker-context-files/*.txt || true
+
+curl -Lo "docker-context-files/constraints-3.7.txt" \
+    https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.7.txt
+
+pip download --dest docker-context-files \
+    --constraint docker-context-files/constraints-3.7.txt  \
+    "apache-airflow[async,aws,azure,celery,dask,elasticsearch,gcp,kubernetes,postgres,redis,slack,ssh,statsd,virtualenv]==2.0.1"
+# [END download]
+
+# [START build]
+docker build . \
+    --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
+    --build-arg AIRFLOW_INSTALLATION_METHOD="apache-airflow" \
+    --build-arg AIRFLOW_VERSION="2.0.1" \
+    --build-arg INSTALL_MYSQL_CLIENT="false" \
+    --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="false" \
+    --build-arg INSTALL_FROM_DOCKER_CONTEXT_FILES="true" \
+    --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/constraints-3.7.txt"
+# [END build]
diff --git a/scripts/ci/images/ci_run_prod_image_test.sh b/scripts/ci/images/ci_run_prod_image_test.sh
new file mode 100755
index 0000000..3039eca
--- /dev/null
+++ b/scripts/ci/images/ci_run_prod_image_test.sh
@@ -0,0 +1,50 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# shellcheck source=scripts/ci/libraries/_initialization.sh
+. "$(dirname "${BASH_SOURCE[0]}")/../libraries/_initialization.sh"
+
+initialization::set_output_color_variables
+
+job_name=$1
+file=$2
+
+set +e
+
+if [[ ${file} == *".sh" ]]; then
+    "${file}"
+    res=$?
+elif [[ ${file} == *"Dockerfile" ]]; then
+    cd "$(dirname "${file}")" || exit 1
+    docker build . --tag "${job_name}"
+    res=$?
+    docker rmi --force "${job_name}"
+else
+    echo "Bad file ${file}. Should be either a Dockerfile or script"
+    exit 1
+fi
+# Print status to status file
+echo "${res}" >"${PARALLEL_JOB_STATUS}"
+
+echo
+# print status to log
+if [[ ${res} == "0" ]]; then
+    echo "${COLOR_GREEN}Extend PROD image test ${job_name} succeeded${COLOR_RESET}"
+else
+    echo "${COLOR_RED}Extend PROD image test ${job_name} failed${COLOR_RESET}"
+fi
+echo
diff --git a/scripts/ci/images/ci_test_examples_of_prod_image_building.sh b/scripts/ci/images/ci_test_examples_of_prod_image_building.sh
new file mode 100755
index 0000000..7e04535
--- /dev/null
+++ b/scripts/ci/images/ci_test_examples_of_prod_image_building.sh
@@ -0,0 +1,91 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# shellcheck source=scripts/ci/libraries/_script_init.sh
+. "$(dirname "${BASH_SOURCE[0]}")/../libraries/_script_init.sh"
+
+SEMAPHORE_NAME="image_tests"
+export SEMAPHORE_NAME
+
+DOCKER_EXAMPLES_DIR=${AIRFLOW_SOURCES}/docs/docker-stack/docker-examples/
+export DOCKER_EXAMPLES_DIR
+
+# Launches an image test job in parallel. Redirects output to the log and sets the right directories.
+# $1 - file to execute (an example build script or a Dockerfile)
+# $2 - name of the job
+function run_image_test_job() {
+    local file=$1
+
+    local job_name=$2
+    mkdir -p "${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job_name}"
+    export JOB_LOG="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job_name}/stdout"
+    export PARALLEL_JOB_STATUS="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job_name}/status"
+    parallel --ungroup --bg --semaphore --semaphorename "${SEMAPHORE_NAME}" \
+        --jobs "${MAX_PARALLEL_IMAGE_JOBS}" \
+            "$(dirname "${BASH_SOURCE[0]}")/ci_run_prod_image_test.sh" "${job_name}" "${file}" >"${JOB_LOG}" 2>&1
+}
+
+
+function test_images() {
+    if [[ ${CI=} == "true" ]]; then
+        echo
+        echo "Skipping the script builds on CI! "
+        echo "They take very long time to build."
+        echo
+    else
+        local scripts_to_test
+        scripts_to_test=$(find "${DOCKER_EXAMPLES_DIR}" -type f -name '*.sh' )
+        for file in ${scripts_to_test}
+        do
+            local job_name
+            job_name=$(basename "${file}")
+            run_image_test_job "${file}" "${job_name}"
+        done
+    fi
+    local dockerfiles_to_test
+    dockerfiles_to_test=$(find "${DOCKER_EXAMPLES_DIR}" -type f -name 'Dockerfile' )
+    for file in ${dockerfiles_to_test}
+    do
+        local job_name
+        job_name="$(basename "$(dirname "${file}")")"
+        run_image_test_job "${file}" "${job_name}"
+    done
+
+}
+
+cd "${AIRFLOW_SOURCES}" || exit 1
+
+docker_engine_resources::get_available_cpus_in_docker
+
+# Building at most 4 images in parallel helps to conserve the disk space used by docker images
+MAX_PARALLEL_IMAGE_JOBS=4
+export MAX_PARALLEL_IMAGE_JOBS
+
+parallel::make_sure_gnu_parallel_is_installed
+parallel::kill_stale_semaphore_locks
+parallel::initialize_monitoring
+
+start_end::group_start "Testing image building"
+
+parallel::monitor_progress
+
+test_images
+
+parallel --semaphore --semaphorename "${SEMAPHORE_NAME}" --wait
+start_end::group_end
+
+parallel::print_job_summary_and_return_status_code
diff --git a/scripts/ci/libraries/_build_images.sh b/scripts/ci/libraries/_build_images.sh
index fa11128..55801e2 100644
--- a/scripts/ci/libraries/_build_images.sh
+++ b/scripts/ci/libraries/_build_images.sh
@@ -820,6 +820,7 @@ function build_images::prepare_prod_build() {
             "--build-arg" "AIRFLOW_VERSION=${INSTALL_AIRFLOW_VERSION}"
         )
         export AIRFLOW_VERSION="${INSTALL_AIRFLOW_VERSION}"
+        export INSTALL_PROVIDERS_FROM_SOURCES="false"
         build_images::add_build_args_for_remote_install
     else
         # When no airflow version/reference is specified, production image is built either from the
diff --git a/scripts/ci/libraries/_docker_engine_resources.sh b/scripts/ci/libraries/_docker_engine_resources.sh
index 18b223d..b5283b3 100644
--- a/scripts/ci/libraries/_docker_engine_resources.sh
+++ b/scripts/ci/libraries/_docker_engine_resources.sh
@@ -28,24 +28,21 @@ function docker_engine_resources::print_overall_stats() {
 
 
 function docker_engine_resources::get_available_memory_in_docker() {
-    MEMORY_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash \
-        "${AIRFLOW_CI_IMAGE}" -c \
+    MEMORY_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash debian:buster-slim -c \
         'echo $(($(getconf _PHYS_PAGES) * $(getconf PAGE_SIZE) / (1024 * 1024)))')
     echo "${COLOR_BLUE}Memory available for Docker${COLOR_RESET}: $(numfmt --to iec $((MEMORY_AVAILABLE_FOR_DOCKER * 1024 * 1024)))"
     export MEMORY_AVAILABLE_FOR_DOCKER
 }
 
 function docker_engine_resources::get_available_cpus_in_docker() {
-    CPUS_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash \
-        "${AIRFLOW_CI_IMAGE}" -c \
+    CPUS_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash debian:buster-slim -c \
         'grep -cE "cpu[0-9]+" </proc/stat')
     echo "${COLOR_BLUE}CPUS available for Docker${COLOR_RESET}: ${CPUS_AVAILABLE_FOR_DOCKER}"
     export CPUS_AVAILABLE_FOR_DOCKER
 }
 
 function docker_engine_resources::get_available_disk_space_in_docker() {
-    DISK_SPACE_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash \
-        "${AIRFLOW_CI_IMAGE}" -c \
+    DISK_SPACE_AVAILABLE_FOR_DOCKER=$(docker run --rm --entrypoint /bin/bash debian:buster-slim -c \
         'df  / | tail -1 | awk '\''{print $4}'\')
     echo "${COLOR_BLUE}Disk space available for Docker${COLOR_RESET}: $(numfmt --to iec $((DISK_SPACE_AVAILABLE_FOR_DOCKER * 1024)))"
     export DISK_SPACE_AVAILABLE_FOR_DOCKER
diff --git a/scripts/ci/libraries/_initialization.sh b/scripts/ci/libraries/_initialization.sh
index 5e38f1e..a0723b9 100644
--- a/scripts/ci/libraries/_initialization.sh
+++ b/scripts/ci/libraries/_initialization.sh
@@ -193,6 +193,7 @@ function initialization::initialize_files_for_rebuild_check() {
         "Dockerfile.ci"
         ".dockerignore"
         "scripts/docker/compile_www_assets.sh"
+        "scripts/docker/common.sh"
         "scripts/docker/install_additional_dependencies.sh"
         "scripts/docker/install_airflow.sh"
         "scripts/docker/install_airflow_from_branch_tip.sh"
diff --git a/scripts/ci/libraries/_parallel.sh b/scripts/ci/libraries/_parallel.sh
index 09c3121..dfe1c4d 100644
--- a/scripts/ci/libraries/_parallel.sh
+++ b/scripts/ci/libraries/_parallel.sh
@@ -16,12 +16,12 @@
 # specific language governing permissions and limitations
 # under the License.
 
+
+# Require SEMAPHORE_NAME
+
 function parallel::initialize_monitoring() {
     PARALLEL_MONITORED_DIR="$(mktemp -d)"
     export PARALLEL_MONITORED_DIR
-
-    PARALLEL_JOBLOG="$(mktemp)"
-    export PARALLEL_JOBLOG
 }
 
 function parallel::make_sure_gnu_parallel_is_installed() {
@@ -53,6 +53,7 @@ function parallel::kill_stale_semaphore_locks() {
 
 # Periodical loop to print summary of all the processes run by parallel
 function parallel::monitor_loop() {
+    trap 'exit 0' TERM
     echo
     echo "Start monitoring of parallel execution in ${PARALLEL_MONITORED_DIR} directory."
     echo
@@ -79,16 +80,13 @@ function parallel::monitor_loop() {
             echo
         done
         echo
-        echo "${COLOR_YELLOW}########### Monitoring progress end: ${progress_report_number} #################${COLOR_RESET}}"
+        echo "${COLOR_YELLOW}########### Monitoring progress end: ${progress_report_number} #################${COLOR_RESET}"
         echo
         end_time=${SECONDS}
         echo "${COLOR_YELLOW}############## $((end_time - start_time)) seconds passed since start ####################### ${COLOR_RESET}"
         sleep 10
         progress_report_number=$((progress_report_number + 1))
     done
-    echo "${COLOR_BLUE}########### STATISTICS #################"
-    docker_engine_resources::print_overall_stats
-    echo "########### STATISTICS #################${COLOR_RESET}"
 }
 
 # Monitors progress of parallel execution and periodically summarizes stdout entries created by
@@ -96,8 +94,6 @@ function parallel::monitor_loop() {
 # parameter to GNU parallel execution.
 function parallel::monitor_progress() {
     echo "Parallel results are stored in: ${PARALLEL_MONITORED_DIR}"
-    echo "Parallel joblog is stored in: ${PARALLEL_JOBLOG}"
-
     parallel::monitor_loop 2>/dev/null &
 
     # shellcheck disable=SC2034
@@ -108,5 +104,59 @@ function parallel::monitor_progress() {
 
 
 function parallel::kill_monitor() {
-    kill -9 ${PARALLEL_MONITORING_PID} >/dev/null 2>&1 || true
+    kill ${PARALLEL_MONITORING_PID} >/dev/null 2>&1 || true
+}
+
+# Outputs logs for successful test type
+# $1 test type
+function parallel::output_log_for_successful_job(){
+    local job=$1
+    local log_dir="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job}"
+    start_end::group_start "${COLOR_GREEN}Output for successful ${job}${COLOR_RESET}"
+    echo "${COLOR_GREEN}##### The ${job} succeeded ##### ${COLOR_RESET}"
+    echo
+    cat "${log_dir}"/stdout
+    echo
+    echo "${COLOR_GREEN}##### The ${job} succeeded ##### ${COLOR_RESET}"
+    echo
+    start_end::group_end
+}
+
+# Outputs logs for a failed job
+# $1 - job name
+function parallel::output_log_for_failed_job(){
+    local job=$1
+    local log_dir="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job}"
+    start_end::group_start "${COLOR_RED}Output for failed ${job}${COLOR_RESET}"
+    echo "${COLOR_RED}##### The ${job} failed ##### ${COLOR_RESET}"
+    echo
+    cat "${log_dir}"/stdout
+    echo
+    echo
+    echo "${COLOR_RED}##### The ${job} failed ##### ${COLOR_RESET}"
+    echo
+    start_end::group_end
+}
+
+# Prints summary of jobs and returns status:
+# 0 - all jobs succeeded (a failed SKIPPED_FAILED_JOB is not counted)
+# >0 - number of failed jobs (not counting the SKIPPED_FAILED_JOB)
+function parallel::print_job_summary_and_return_status_code() {
+    local return_code="0"
+    local job
+    for job_path in "${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/"*
+    do
+        job="$(basename "${job_path}")"
+        status=$(cat "${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${job}/status")
+        if [[ ${status} == "0" ]]; then
+            parallel::output_log_for_successful_job "${job}"
+        else
+            parallel::output_log_for_failed_job "${job}"
+            # SKIPPED_FAILED_JOB failure does not trigger whole test failure
+            if [[ ${SKIPPED_FAILED_JOB=} != "${job}" ]]; then
+                return_code=$((return_code + 1))
+            fi
+        fi
+    done
+    return "${return_code}"
 }
diff --git a/scripts/ci/testing/ci_run_airflow_testing.sh b/scripts/ci/testing/ci_run_airflow_testing.sh
index 1cd1c36..8286874 100755
--- a/scripts/ci/testing/ci_run_airflow_testing.sh
+++ b/scripts/ci/testing/ci_run_airflow_testing.sh
@@ -20,6 +20,9 @@
 RUN_TESTS="true"
 export RUN_TESTS
 
+SKIPPED_FAILED_JOB="Quarantined"
+export SKIPPED_FAILED_JOB
+
 # shellcheck source=scripts/ci/libraries/_script_init.sh
 . "$( dirname "${BASH_SOURCE[0]}" )/../libraries/_script_init.sh"
 
@@ -167,58 +170,6 @@ function run_test_types_in_parallel() {
     start_end::group_end
 }
 
-# Outputs logs for successful test type
-# $1 test type
-function output_log_for_successful_test_type(){
-    local test_type=$1
-    local log_dir="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${test_type}"
-    start_end::group_start "${COLOR_GREEN}Output for successful ${test_type}${COLOR_RESET}"
-    echo "${COLOR_GREEN}##### Test type ${test_type} succeeded ##### ${COLOR_RESET}"
-    echo
-    cat "${log_dir}"/stdout
-    echo
-    echo "${COLOR_GREEN}##### Test type ${test_type} succeeded ##### ${COLOR_RESET}"
-    echo
-    start_end::group_end
-}
-
-# Outputs logs for failed test type
-# $1 test type
-function output_log_for_failed_test_type(){
-    local test_type=$1
-    local log_dir="${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${test_type}"
-    start_end::group_start "${COLOR_RED}Output: for failed ${test_type}${COLOR_RESET}"
-    echo "${COLOR_RED}##### Test type ${test_type} failed ##### ${COLOR_RESET}"
-    echo
-    cat "${log_dir}"/stdout
-    echo
-    echo
-    echo "${COLOR_RED}##### Test type ${test_type} failed ##### ${COLOR_RESET}"
-    echo
-    start_end::group_end
-}
-
-# Prints summary of tests and returns status:
-# 0 - all test types succeeded (Quarantine is not counted)
-# >0 - number of failed test types (except Quarantine)
-function print_test_summary_and_return_test_status_code() {
-    local return_code="0"
-    local test_type
-    for test_type in ${TEST_TYPES}
-    do
-        status=$(cat "${PARALLEL_MONITORED_DIR}/${SEMAPHORE_NAME}/${test_type}/status")
-        if [[ ${status} == "0" ]]; then
-            output_log_for_successful_test_type "${test_type}"
-        else
-            output_log_for_failed_test_type "${test_type}"
-            # Quarantined tests failure does not trigger whole test failure
-            if [[ ${TEST_TYPE} != "Quarantined" ]]; then
-                return_code=$((return_code + 1))
-            fi
-        fi
-    done
-    return "${return_code}"
-}
 
 export MEMORY_REQUIRED_FOR_INTEGRATION_TEST_PARALLEL_RUN=33000
 
@@ -236,8 +187,6 @@ export MEMORY_REQUIRED_FOR_INTEGRATION_TEST_PARALLEL_RUN=33000
 #   * MEMORY_AVAILABLE_FOR_DOCKER - memory that is available in docker (set by cleanup_runners)
 #
 function run_all_test_types_in_parallel() {
-    local test_type
-
     cleanup_runner
 
     start_end::group_start "Determine how to run the tests"
@@ -278,7 +227,7 @@ function run_all_test_types_in_parallel() {
     fi
     set -e
     # this will exit with error code in case some of the non-Quarantined tests failed
-    print_test_summary_and_return_test_status_code
+    parallel::print_job_summary_and_return_status_code
 }
 
 build_images::prepare_ci_build
diff --git a/scripts/docker/common.sh b/scripts/docker/common.sh
new file mode 100755
index 0000000..28307e3
--- /dev/null
+++ b/scripts/docker/common.sh
@@ -0,0 +1,63 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+set -euo pipefail
+
+test -v INSTALL_MYSQL_CLIENT
+test -v AIRFLOW_INSTALL_USER_FLAG
+test -v AIRFLOW_REPO
+test -v AIRFLOW_BRANCH
+test -v AIRFLOW_PIP_VERSION
+
+set -x
+
+function common::get_airflow_version_specification() {
+    if [[ -z ${AIRFLOW_VERSION_SPECIFICATION}
+        && -n ${AIRFLOW_VERSION}
+        && ${AIRFLOW_INSTALLATION_METHOD} != "." ]]; then
+        AIRFLOW_VERSION_SPECIFICATION="==${AIRFLOW_VERSION}"
+    fi
+}
+
+function common::get_constraints_location() {
+    # auto-detect Airflow-constraint reference and location
+    if [[ -z "${AIRFLOW_CONSTRAINTS_REFERENCE}" ]]; then
+        if [[ ${AIRFLOW_VERSION} =~ [^0-9]*1[^0-9]*10[^0-9]([0-9]*) ]]; then
+            # All types of references/versions match this regexp for 1.10 series
+            # for example v1_10_test, 1.10.10, 1.10.9 etc. ${BASH_REMATCH[1]} matches the last
+            # minor digit of the version and its length is 0 for v1_10_test, 1 for 1.10.9 and 2 for 1.10.10+
+            AIRFLOW_MINOR_VERSION_NUMBER=${BASH_REMATCH[1]}
+            if [[ ${#AIRFLOW_MINOR_VERSION_NUMBER} == "0" ]]; then
+                # For v1_10_* branches use constraints-1-10 branch
+                AIRFLOW_CONSTRAINTS_REFERENCE=constraints-1-10
+            else
+                AIRFLOW_CONSTRAINTS_REFERENCE=constraints-${AIRFLOW_VERSION}
+            fi
+        elif  [[ ${AIRFLOW_VERSION} =~ v?2.* ]]; then
+            AIRFLOW_CONSTRAINTS_REFERENCE=constraints-${AIRFLOW_VERSION}
+        else
+            AIRFLOW_CONSTRAINTS_REFERENCE=${DEFAULT_CONSTRAINTS_BRANCH}
+        fi
+    fi
+
+    if [[ -z ${AIRFLOW_CONSTRAINTS_LOCATION} ]]; then
+        local constraints_base="https://raw.githubusercontent.com/${CONSTRAINTS_GITHUB_REPOSITORY}/${AIRFLOW_CONSTRAINTS_REFERENCE}"
+        local python_version
+        python_version="$(python --version 2>/dev/stdout | cut -d " " -f 2 | cut -d "." -f 1-2)"
+        AIRFLOW_CONSTRAINTS_LOCATION="${constraints_base}/${AIRFLOW_CONSTRAINTS}-${python_version}.txt"
+    fi
+}
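
The new common.sh centralizes the environment checks and the constraints auto-detection that the
installation scripts below used to repeat. As an illustration of how the reference detection behaves,
here is a small standalone sketch of the same logic; the sample versions and the default-branch value
are made up for the example, whereas the real script takes them from the Docker build arguments:

    #!/usr/bin/env bash
    # Illustration only: re-runs the version-matching logic from
    # common::get_constraints_location on a few sample versions.
    set -euo pipefail

    DEFAULT_CONSTRAINTS_BRANCH="constraints-master"   # assumed placeholder value

    for AIRFLOW_VERSION in v1_10_test 1.10.9 1.10.10 2.0.1 dev0; do
        if [[ ${AIRFLOW_VERSION} =~ [^0-9]*1[^0-9]*10[^0-9]([0-9]*) ]]; then
            # BASH_REMATCH[1] is empty for v1_10_test, "9" for 1.10.9, "10" for 1.10.10
            if [[ ${#BASH_REMATCH[1]} == "0" ]]; then
                reference="constraints-1-10"
            else
                reference="constraints-${AIRFLOW_VERSION}"
            fi
        elif [[ ${AIRFLOW_VERSION} =~ v?2.* ]]; then
            reference="constraints-${AIRFLOW_VERSION}"
        else
            reference="${DEFAULT_CONSTRAINTS_BRANCH}"
        fi
        echo "${AIRFLOW_VERSION} -> ${reference}"
    done

Running the sketch prints constraints-1-10 for v1_10_test, constraints-1.10.9 and constraints-1.10.10
for the released 1.10 versions, constraints-2.0.1 for 2.0.1 and the default branch for anything else;
the full constraints URL is then assembled from CONSTRAINTS_GITHUB_REPOSITORY, this reference and the
detected Python version as shown in the diff above.
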
diff --git a/scripts/docker/compile_www_assets.sh b/scripts/docker/compile_www_assets.sh
index 04157b6..e303f51 100755
--- a/scripts/docker/compile_www_assets.sh
+++ b/scripts/docker/compile_www_assets.sh
@@ -17,9 +17,6 @@
 # under the License.
 # shellcheck disable=SC2086
 set -euo pipefail
-
-test -v PYTHON_MAJOR_MINOR_VERSION
-
 set -x
 
 # Installs additional dependencies passed as Argument to the Docker build command
@@ -31,7 +28,7 @@ function compile_www_assets() {
     md5sum_file="static/dist/sum.md5"
     readonly md5sum_file
     local airflow_site_package
-    airflow_site_package="/root/.local/lib/python${PYTHON_MAJOR_MINOR_VERSION}/site-packages/airflow"
+    airflow_site_package="$(python -m site --user-site)"
     local www_dir=""
     if [[ -f "${airflow_site_package}/www_rbac/package.json" ]]; then
         www_dir="${airflow_site_package}/www_rbac"
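
Dropping the PYTHON_MAJOR_MINOR_VERSION check works because the script no longer hard-codes the
user site-packages path but asks the interpreter for it. A quick illustration of the new lookup;
the printed path is only an example and depends on the user and Python version inside the image:

    # Resolve the user site-packages directory the way the patched script does.
    airflow_site_package="$(python -m site --user-site)"
    echo "${airflow_site_package}"
    # e.g. /root/.local/lib/python3.6/site-packages
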
diff --git a/scripts/docker/install_airflow.sh b/scripts/docker/install_airflow.sh
index 5f1e9d9..bfcc7e9 100755
--- a/scripts/docker/install_airflow.sh
+++ b/scripts/docker/install_airflow.sh
@@ -26,17 +26,8 @@
 #                                 dependencies (with EAGER_UPGRADE_ADDITIONAL_REQUIREMENTS added)
 #
 # shellcheck disable=SC2086
-set -euo pipefail
-
-test -v AIRFLOW_INSTALLATION_METHOD
-test -v AIRFLOW_INSTALL_EDITABLE_FLAG
-test -v AIRFLOW_INSTALL_USER_FLAG
-test -v INSTALL_MYSQL_CLIENT
-test -v UPGRADE_TO_NEWER_DEPENDENCIES
-test -v CONTINUE_ON_PIP_CHECK_FAILURE
-test -v AIRFLOW_CONSTRAINTS_LOCATION
-
-set -x
+# shellcheck source=scripts/docker/common.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/common.sh"
 
 function install_airflow() {
     # Sanity check for editable installation mode.
@@ -87,6 +78,11 @@ function install_airflow() {
         pip install ${AIRFLOW_INSTALL_USER_FLAG} --upgrade "pip==${AIRFLOW_PIP_VERSION}"
         pip check || ${CONTINUE_ON_PIP_CHECK_FAILURE}
     fi
+
 }
 
+common::get_airflow_version_specification
+
+common::get_constraints_location
+
 install_airflow
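
With the per-script test -v guards moved into common.sh, each installation script now sources the
shared file relative to its own location and calls the helpers it needs before doing its work.
A minimal sketch of that sourcing idiom, with made-up file and variable names:

    # shared.sh -- stands in for scripts/docker/common.sh
    set -euo pipefail
    test -v SOME_BUILD_ARG        # fail fast if the expected build arg is missing
    set -x

    # caller.sh -- stands in for install_airflow.sh
    # shellcheck source=shared.sh
    . "$( dirname "${BASH_SOURCE[0]}" )/shared.sh"
    echo "shared checks passed, SOME_BUILD_ARG=${SOME_BUILD_ARG}"

    # run it with the variable the shared file insists on:
    #   SOME_BUILD_ARG=1 bash caller.sh
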
diff --git a/scripts/docker/install_airflow_from_branch_tip.sh b/scripts/docker/install_airflow_from_branch_tip.sh
index 3741055..6e34d05 100755
--- a/scripts/docker/install_airflow_from_branch_tip.sh
+++ b/scripts/docker/install_airflow_from_branch_tip.sh
@@ -26,16 +26,9 @@
 #
 # If INSTALL_MYSQL_CLIENT is set to false, mysql extra is removed
 #
-set -euo pipefail
+# shellcheck source=scripts/docker/common.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/common.sh"
 
-test -v INSTALL_MYSQL_CLIENT
-test -v AIRFLOW_INSTALL_USER_FLAG
-test -v AIRFLOW_REPO
-test -v AIRFLOW_BRANCH
-test -v AIRFLOW_CONSTRAINTS_LOCATION
-test -v AIRFLOW_PIP_VERSION
-
-set -x
 
 function install_airflow_from_branch_tip() {
     echo
@@ -57,4 +50,6 @@ function install_airflow_from_branch_tip() {
     pip uninstall --yes apache-airflow
 }
 
+common::get_constraints_location
+
 install_airflow_from_branch_tip
diff --git a/scripts/docker/install_from_docker_context_files.sh b/scripts/docker/install_from_docker_context_files.sh
index 48aa933..d1982cf 100755
--- a/scripts/docker/install_from_docker_context_files.sh
+++ b/scripts/docker/install_from_docker_context_files.sh
@@ -22,19 +22,13 @@
 # The packages are prepared from current sources and placed in the 'docker-context-files folder
 # Then both airflow and provider packages are installed using those packages rather than
 # PyPI
-set -euo pipefail
-
-test -v AIRFLOW_EXTRAS
-test -v AIRFLOW_INSTALL_USER_FLAG
-test -v AIRFLOW_CONSTRAINTS_LOCATION
-test -v AIRFLOW_PIP_VERSION
-test -v CONTINUE_ON_PIP_CHECK_FAILURE
-test -v EAGER_UPGRADE_ADDITIONAL_REQUIREMENTS
-test -v UPGRADE_TO_NEWER_DEPENDENCIES
-
-set -x
+# shellcheck source=scripts/docker/common.sh
+. "$( dirname "${BASH_SOURCE[0]}" )/common.sh"
 
 function install_airflow_and_providers_from_docker_context_files(){
+    if [[ ${INSTALL_MYSQL_CLIENT} != "true" ]]; then
+        AIRFLOW_EXTRAS=${AIRFLOW_EXTRAS/mysql,}
+    fi
     # Find Apache Airflow packages in docker-context files
     local reinstalling_apache_airflow_package
     reinstalling_apache_airflow_package=$(ls \
@@ -68,8 +62,12 @@ function install_airflow_and_providers_from_docker_context_files(){
         echo
         echo Force re-installing airflow and providers from local files with constraints and upgrade if needed
         echo
-        # Remove provider packages from constraint files because they are locally prepared
-        curl -L "${AIRFLOW_CONSTRAINTS_LOCATION}" | grep -ve '^apache-airflow' > /tmp/constraints.txt
+        if [[ ${AIRFLOW_CONSTRAINTS_LOCATION} == "/"* ]]; then
+            grep -ve '^apache-airflow' <"${AIRFLOW_CONSTRAINTS_LOCATION}" > /tmp/constraints.txt
+        else
+            # Remove provider packages from constraint files because they are locally prepared
+            curl -L "${AIRFLOW_CONSTRAINTS_LOCATION}" | grep -ve '^apache-airflow' > /tmp/constraints.txt
+        fi
         # force reinstall airflow + provider package local files with constraints + upgrade if needed
         pip install ${AIRFLOW_INSTALL_USER_FLAG} --force-reinstall \
             ${reinstalling_apache_airflow_package} ${reinstalling_apache_airflow_providers_packages} \
@@ -106,5 +104,7 @@ install_all_other_packages_from_docker_context_files() {
     fi
 }
 
+common::get_constraints_location
+
 install_airflow_and_providers_from_docker_context_files
 install_all_other_packages_from_docker_context_files
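
Two behavioural changes in this script are worth calling out: the mysql extra is stripped when
INSTALL_MYSQL_CLIENT is not "true", and AIRFLOW_CONSTRAINTS_LOCATION may now point at a local file
(a path starting with "/") instead of a URL. A standalone sketch of both, with invented example
values standing in for the real build arguments:

    # Illustration only - the variable values below are examples.
    AIRFLOW_EXTRAS="async,celery,mysql,postgres"
    INSTALL_MYSQL_CLIENT="false"
    AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/constraints-3.6.txt"

    # 1. Drop the mysql extra when the mysql client is not part of the image.
    if [[ ${INSTALL_MYSQL_CLIENT} != "true" ]]; then
        AIRFLOW_EXTRAS=${AIRFLOW_EXTRAS/mysql,}   # -> async,celery,postgres
    fi

    # 2. Read constraints from a local file or download them, then drop the
    #    apache-airflow lines because those packages are built locally.
    if [[ ${AIRFLOW_CONSTRAINTS_LOCATION} == "/"* ]]; then
        grep -ve '^apache-airflow' <"${AIRFLOW_CONSTRAINTS_LOCATION}" > /tmp/constraints.txt
    else
        curl -L "${AIRFLOW_CONSTRAINTS_LOCATION}" | grep -ve '^apache-airflow' > /tmp/constraints.txt
    fi
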

[airflow] 01/05: Quarantine test_clit_tasks - they have a lot of errors

Posted by po...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-0-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit d7da7f5a11ca85e1d3bf77d212b96077821f9eed
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Tue Mar 23 03:57:16 2021 +0100

    Quarantine test_clit_tasks - they have a lot of errors
---
 tests/cli/commands/test_task_command.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/cli/commands/test_task_command.py b/tests/cli/commands/test_task_command.py
index efceef2..84d8162 100644
--- a/tests/cli/commands/test_task_command.py
+++ b/tests/cli/commands/test_task_command.py
@@ -62,6 +62,7 @@ class TestCliTasks(unittest.TestCase):
         cls.dagbag = DagBag(include_examples=True)
         cls.parser = cli_parser.get_parser()
 
+    @pytest.mark.skip(reason="This test hangs in v2-0-test branch")
     def test_cli_list_tasks(self):
         for dag_id in self.dagbag.dags:
             args = self.parser.parse_args(['tasks', 'list', dag_id])