Posted to commits@airflow.apache.org by po...@apache.org on 2021/06/22 19:25:00 UTC

[airflow] 22/47: Adding extra requirements for build and runtime of the PROD image. (#16170)

This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch v2-1-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 6395dbe0c3a1871c5ff46f0805e5c920391dda22
Author: Jarek Potiuk <ja...@potiuk.com>
AuthorDate: Wed Jun 2 01:08:44 2021 +0200

    Adding extra requirements for build and runtime of the PROD image. (#16170)
    
    This PR adds the capability of adding extra requirements to the PROD image:
    
    1) During the build by placing requirements.txt in the
       ``docker-context-files`` folder
    
    2) During execution of the container - by passing the
       _PIP_ADDITIONAL_REQUIREMENTS variable
    
    The second case is only useful during quick tests/development and
    should not be used in production.
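    
    For example (a sketch - the pinned package and image tag are illustrative):
    
        # 1) installed during the image build
        echo "lxml==4.6.3" > docker-context-files/requirements.txt
        docker build . --tag my-image:0.0.1
    
        # 2) installed at container start (quick test/development only)
        docker run --env "_PIP_ADDITIONAL_REQUIREMENTS=lxml==4.6.3" my-image:0.0.1 bash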
    
    Also updated the documentation to contain all development/test
    variables for docker compose, clarifying that the options
    starting with _ are meant to be used only for quick testing.
    
    (cherry picked from commit d245992d2ac0d781b6b55fd030a076ca5c799bf7)
---
 Dockerfile                                         |   7 +
 IMAGES.rst                                         |  17 +-
 dev/README_RELEASE_PROVIDER_PACKAGES.md            |   4 +-
 dev/check_files.py                                 |   4 +-
 docs/apache-airflow/index.rst                      |   1 -
 docs/apache-airflow/start/docker-compose.yaml      |  26 ++-
 docs/apache-airflow/start/docker.rst               |  38 ++--
 docs/conf.py                                       |   5 +
 docs/docker-stack/build.rst                        | 213 ++++++++++++++++-----
 .../customizing/add-build-essential-custom.sh      |   4 +-
 .../docker-examples/customizing/custom-sources.sh  |   4 +-
 .../customizing/github-different-repository.sh     |   4 +-
 .../{github-master.sh => github-main.sh}           |   4 +-
 .../customizing/github-v2-1-test.sh                |   4 +-
 .../customizing/pypi-dev-runtime-deps.sh           |   4 +-
 .../customizing/pypi-extras-and-deps.sh            |   4 +-
 .../customizing/pypi-selected-version.sh           |   4 +-
 .../docker-examples/customizing/stable-airflow.sh  |   4 +-
 .../extending/add-apt-packages/Dockerfile          |   2 +-
 .../add-build-essential-extend/Dockerfile          |   2 +-
 .../extending/add-pypi-packages/Dockerfile         |   2 +-
 .../extending/embedding-dags/Dockerfile            |   2 +-
 .../extending/writable-directory/Dockerfile        |   2 +-
 .../restricted/restricted_environments.sh          |   3 +-
 docs/docker-stack/entrypoint.rst                   | 150 +++++++++------
 docs/docker-stack/index.rst                        |  31 ++-
 docs/docker-stack/recipes.rst                      |   4 +-
 docs/helm-chart/production-guide.rst               |  27 ++-
 docs/helm-chart/quick-start.rst                    | 102 +++++++++-
 scripts/in_container/prod/entrypoint_prod.sh       |  16 ++
 30 files changed, 519 insertions(+), 175 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 0d872ac..444f0a5 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -267,6 +267,13 @@ RUN if [[ ${INSTALL_FROM_DOCKER_CONTEXT_FILES} == "true" ]]; then \
     find /root/.local -executable -print0 | xargs --null chmod g+x; \
     find /root/.local -print0 | xargs --null chmod g+rw
 
+# If there is a requirements.txt file in "docker-context-files", it will be installed
+# during the build in addition to whatever has been installed so far. It is recommended
+# that the requirements.txt contains only dependencies pinned with the == version specifier
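+# (for example, a file with the single line "lxml==4.6.3" - an illustrative pin - adds one extra dependency)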
+RUN if [[ -f /docker-context-files/requirements.txt ]]; then \
+        pip install --no-cache-dir --user -r /docker-context-files/requirements.txt; \
+    fi
+
 ARG BUILD_ID
 ARG COMMIT_SHA
 ARG AIRFLOW_IMAGE_REPOSITORY="https://github.com/apache/airflow"
diff --git a/IMAGES.rst b/IMAGES.rst
index 6b4ca31..d332b99 100644
--- a/IMAGES.rst
+++ b/IMAGES.rst
@@ -495,7 +495,7 @@ additional apt dev and runtime dependencies.
     --build-arg ADDITIONAL_PYTHON_DEPS="pandas"
     --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++"
     --build-arg ADDITIONAL_RUNTIME_APT_DEPS="default-jre-headless"
-    --tag my-image
+    --tag my-image:0.0.1
 
 
 the same image can be built using ``breeze`` (it supports auto-completion of the options):
@@ -533,7 +533,7 @@ based on example in `this comment <https://github.com/apache/airflow/issues/8605
     --build-arg ADDITIONAL_RUNTIME_APT_COMMAND="curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add --no-tty - && curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
     --build-arg ADDITIONAL_RUNTIME_APT_DEPS="msodbcsql17 unixodbc git procps vim" \
     --build-arg ADDITIONAL_RUNTIME_ENV_VARS="ACCEPT_EULA=Y" \
-    --tag my-image
+    --tag my-image:0.0.1
 
 CI image build arguments
 ------------------------
@@ -664,7 +664,7 @@ This builds the CI image in version 3.7 with default extras ("all").
 
 .. code-block:: bash
 
-  docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster"
+  docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" --tag my-image:0.0.1
 
 
 This builds the CI image in version 3.6 with "gcp" extra only.
@@ -672,7 +672,7 @@ This builds the CI image in version 3.6 with "gcp" extra only.
 .. code-block:: bash
 
   docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg AIRFLOW_EXTRAS=gcp
+    --build-arg AIRFLOW_EXTRAS=gcp --tag my-image:0.0.1
 
 
 This builds the CI image in version 3.6 with "apache-beam" extra added.
@@ -680,28 +680,29 @@ This builds the CI image in version 3.6 with "apache-beam" extra added.
 .. code-block:: bash
 
   docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="apache-beam"
+    --build-arg ADDITIONAL_AIRFLOW_EXTRAS="apache-beam" --tag my-image:0.0.1
 
 This builds the CI image in version 3.6 with "mssql" additional package added.
 
 .. code-block:: bash
 
   docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg ADDITIONAL_PYTHON_DEPS="mssql"
+    --build-arg ADDITIONAL_PYTHON_DEPS="mssql" --tag my-image:0.0.1
 
 This builds the CI image in version 3.6 with "gcc" and "g++" additional apt dev dependencies added.
 
 .. code-block::
 
   docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++"
+    --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++" --tag my-image:0.0.1
 
 This builds the CI image in version 3.6 with "jdbc" extra and "default-jre-headless" additional apt runtime dependencies added.
 
 .. code-block::
 
   docker build . -f Dockerfile.ci --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
-    --build-arg AIRFLOW_EXTRAS=jdbc --build-arg ADDITIONAL_RUNTIME_DEPS="default-jre-headless"
+    --build-arg AIRFLOW_EXTRAS=jdbc --build-arg ADDITIONAL_RUNTIME_DEPS="default-jre-headless" \
+    --tag my-image:0.0.1
 
 CI Image manifests
 ------------------
diff --git a/dev/README_RELEASE_PROVIDER_PACKAGES.md b/dev/README_RELEASE_PROVIDER_PACKAGES.md
index 8779ef3..5cf23e0 100644
--- a/dev/README_RELEASE_PROVIDER_PACKAGES.md
+++ b/dev/README_RELEASE_PROVIDER_PACKAGES.md
@@ -601,14 +601,14 @@ USER ${AIRFLOW_UID}
 To build the image and run a shell in it, run:
 
 ```shell script
-docker build . -t my-airflow
+docker build . --tag my-image:0.0.1
 docker run  -ti \
     --rm \
     -v "$PWD/data:/opt/airflow/" \
     -v "$PWD/keys/:/keys/" \
     -p 8080:8080 \
     -e AIRFLOW__CORE__LOAD_EXAMPLES=True \
-    my-airflow bash
+    my-image:0.0.1 bash
 ```
 
 ### Additional Verification
diff --git a/dev/check_files.py b/dev/check_files.py
index a305ef8..26f825f 100644
--- a/dev/check_files.py
+++ b/dev/check_files.py
@@ -47,7 +47,7 @@ RUN pip install "apache-airflow-upgrade-check=={}"
 
 
 DOCKER_CMD = """
-docker build -t local/airflow .
+docker build --tag local/airflow .
 docker run local/airflow info
 """
 
@@ -80,7 +80,7 @@ def create_docker(txt: str):
     print("\n[bold]To check installation run:[/bold]")
     print(
         """\
-        docker build -f Dockerfile.pmc -t local/airflow .
+        docker build -f Dockerfile.pmc --tag local/airflow .
         docker run local/airflow info
         """
     )
diff --git a/docs/apache-airflow/index.rst b/docs/apache-airflow/index.rst
index 4588ca8..abace7f 100644
--- a/docs/apache-airflow/index.rst
+++ b/docs/apache-airflow/index.rst
@@ -101,7 +101,6 @@ unit of work and continuity.
     changelog
     best-practices
     production-deployment
-    backport-providers
     faq
     privacy_notice
 
diff --git a/docs/apache-airflow/start/docker-compose.yaml b/docs/apache-airflow/start/docker-compose.yaml
index ffe5e1e..5a301cf 100644
--- a/docs/apache-airflow/start/docker-compose.yaml
+++ b/docs/apache-airflow/start/docker-compose.yaml
@@ -23,16 +23,21 @@
 # This configuration supports basic configuration using environment variables or an .env file
 # The following variables are supported:
 #
-# AIRFLOW_IMAGE_NAME         - Docker image name used to run Airflow.
-#                              Default: apache/airflow:master-python3.8
-# AIRFLOW_UID                - User ID in Airflow containers
-#                              Default: 50000
-# AIRFLOW_GID                - Group ID in Airflow containers
-#                              Default: 50000
-# _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account.
-#                              Default: airflow
-# _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account.
-#                              Default: airflow
+# AIRFLOW_IMAGE_NAME           - Docker image name used to run Airflow.
+#                                Default: apache/airflow:master-python3.8
+# AIRFLOW_UID                  - User ID in Airflow containers
+#                                Default: 50000
+# AIRFLOW_GID                  - Group ID in Airflow containers
+#                                Default: 50000
+#
+# Those configurations are mostly useful when you are testing or trying out Airflow standalone
+#
+# _AIRFLOW_WWW_USER_USERNAME   - Username for the administrator account (if requested).
+#                                Default: airflow
+# _AIRFLOW_WWW_USER_PASSWORD   - Password for the administrator account (if requested).
+#                                Default: airflow
+# _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
+#                                Default: ''
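+#                                Example (illustrative): lxml==4.6.3 charset-normalizer==1.4.1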
 #
 # Feel free to modify this file to suit your needs.
 ---
@@ -50,6 +55,7 @@ x-airflow-common:
     AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
     AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
     AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
+    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
   volumes:
     - ./dags:/opt/airflow/dags
     - ./logs:/opt/airflow/logs
diff --git a/docs/apache-airflow/start/docker.rst b/docs/apache-airflow/start/docker.rst
index 26a9132..32bf4c2 100644
--- a/docs/apache-airflow/start/docker.rst
+++ b/docs/apache-airflow/start/docker.rst
@@ -15,6 +15,8 @@
     specific language governing permissions and limitations
     under the License.
 
+.. _running-airflow-in-docker:
+
 Running Airflow in Docker
 #########################
 
@@ -60,7 +62,8 @@ Some directories in the container are mounted, which means that their contents a
 - ``./logs`` - contains logs from task execution and scheduler.
 - ``./plugins`` - you can put your :doc:`custom plugins </plugins>` here.
 
-This file uses the latest Airflow image (`apache/airflow <https://hub.docker.com/r/apache/airflow>`__). If you need install a new Python library or system library, you can :doc:`customize and extend it <docker-stack:index>`.
+This file uses the latest Airflow image (`apache/airflow <https://hub.docker.com/r/apache/airflow>`__).
+If you need to install a new Python library or system library, you can :doc:`build your image <docker-stack:index>`.
 
 .. _initializing_docker_compose_environment:
 
@@ -247,13 +250,26 @@ runtime user id which is unknown at the time of building the image.
 |                                | you want to use different UID than default it must  |                          |
 |                                | be set to ``0``.                                    |                          |
 +--------------------------------+-----------------------------------------------------+--------------------------+
-| ``_AIRFLOW_WWW_USER_USERNAME`` | Username for the administrator UI account.          |                          |
-|                                | If this value is specified, admin UI user gets      |                          |
-|                                | created automatically. This is only useful when     |                          |
-|                                | you want to run Airflow for a test-drive and        |                          |
-|                                | want to start a container with embedded development |                          |
-|                                | database.                                           |                          |
-+--------------------------------+-----------------------------------------------------+--------------------------+
-| ``_AIRFLOW_WWW_USER_PASSWORD`` | Password for the administrator UI account.          |                          |
-|                                | Only used when ``_AIRFLOW_WWW_USER_USERNAME`` set.  |                          |
-+--------------------------------+-----------------------------------------------------+--------------------------+
+
+Those additional variables are useful when you are trying out or testing the Airflow installation via docker compose.
+They are not intended to be used in production, but they make the environment faster to bootstrap for first-time
+users with the most common customizations.
+
++----------------------------------+-----------------------------------------------------+--------------------------+
+|   Variable                       | Description                                         | Default                  |
++==================================+=====================================================+==========================+
+| ``_AIRFLOW_WWW_USER_USERNAME``   | Username for the administrator UI account.          | airflow                  |
+|                                  | If this value is specified, admin UI user gets      |                          |
+|                                  | created automatically. This is only useful when     |                          |
+|                                  | you want to run Airflow for a test-drive and        |                          |
+|                                  | want to start a container with embedded development |                          |
+|                                  | database.                                           |                          |
++----------------------------------+-----------------------------------------------------+--------------------------+
+| ``_AIRFLOW_WWW_USER_PASSWORD``   | Password for the administrator UI account.          | airflow                  |
+|                                  | Only used when ``_AIRFLOW_WWW_USER_USERNAME`` set.  |                          |
++----------------------------------+-----------------------------------------------------+--------------------------+
+| ``_PIP_ADDITIONAL_REQUIREMENTS`` | If not empty, Airflow containers will attempt to    |                          |
+|                                  | install requirements specified in the variable.     |                          |
+|                                  | Example: ``lxml==4.6.3 charset-normalizer==1.4.1``. |                          |
+|                                  | Available in Airflow image 2.1.1 and above.         |                          |
++----------------------------------+-----------------------------------------------------+--------------------------+
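+
+For example, a minimal sketch of using it with the quick-start compose file
+(the pinned packages are illustrative):
+
+.. code-block:: bash
+
+    _PIP_ADDITIONAL_REQUIREMENTS="lxml==4.6.3 charset-normalizer==1.4.1" docker-compose up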
diff --git a/docs/conf.py b/docs/conf.py
index e403ffe..1e798df 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -258,10 +258,15 @@ if PACKAGE_NAME == 'apache-airflow':
     html_extra_with_substitutions = [
         f"{ROOT_DIR}/docs/apache-airflow/start/docker-compose.yaml",
     ]
+    # Replace "|version|" in links
     manual_substitutions_in_generated_html = [
         "installation.html",
     ]
 
+if PACKAGE_NAME == 'docker-stack':
+    # Replace "|version|" inside ```` quotes
+    manual_substitutions_in_generated_html = ["build.html"]
+
 # -- Theme configuration -------------------------------------------------------
 # Custom sidebar templates, maps document names to template names.
 html_sidebars = {
diff --git a/docs/docker-stack/build.rst b/docs/docker-stack/build.rst
index 3a5a977..46f87fb 100644
--- a/docs/docker-stack/build.rst
+++ b/docs/docker-stack/build.rst
@@ -15,16 +15,134 @@
     specific language governing permissions and limitations
     under the License.
 
+.. _build:build_image:
+
 Building the image
 ==================
 
-Before you dive-deeply in the way how the Airflow Image is build, named and why we are doing it the
-way we do, you might want to know very quickly how you can extend or customize the existing image
-for Apache Airflow. This chapter gives you a short answer to those questions.
+Before you dive deeply into the way the Airflow image is built, let us first explain why you might need
+to build a custom container image, and show a few typical ways you can do it.
+
+Why a custom image?
+-------------------
+
+The Apache Airflow community releases Docker Images which are ``reference images`` for Apache Airflow.
+However, Airflow has more than 60 community-managed providers (installable via extras) and some of the
+default extras/providers installed are not used by everyone, sometimes other extras/providers
+are needed, sometimes (very often actually) you need to add your own custom dependencies,
+packages or even custom providers.
+
+In Kubernetes and Docker terms this means that you need another image with your specific requirements.
+This is why you should learn how to build your own Docker (or more properly Container) image.
+You might be tempted to use the ``reference image`` and dynamically install new packages while
+starting your containers, but this is a bad idea for multiple reasons - starting with the fragility of the build
+and ending with the extra time needed to install those packages - which has to happen every time every
+container starts. The only viable way to deal with new dependencies and requirements in production is to
+build and use your own image. You should only install dependencies dynamically in
+"hobbyist" and "quick start" scenarios, when you want to iterate quickly to try things out, and later
+replace that with your own images.
+
+How to build your own image
+---------------------------
+
+There are several typical scenarios that you will encounter, and here are quick recipes on how to achieve
+your goal. To understand the details you can read further, but for the simple cases using
+typical tools, here are simple examples.
+
+In the simplest case, building your image consists of these steps:
+
+1) Create your own ``Dockerfile`` (name it ``Dockerfile``) where you add:
+
+* information about what your image should be based on (for example ``FROM apache/airflow:|version|-python3.8``)
+
+* additional steps that should be executed in your image (typically in the form of ``RUN <command>``)
+
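+A minimal sketch of such a ``Dockerfile``, created here with a heredoc (the added
+package is illustrative):
+
+.. code-block:: bash
+
+   cat <<EOF > Dockerfile
+   FROM apache/airflow:2.1.0-python3.8
+   RUN pip install --no-cache-dir lxml
+   EOF
+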
+2) Build your image. This can be done with the ``docker`` CLI tools, and the examples below assume ``docker``
+   is used. There are other tools like ``kaniko`` or ``podman`` that allow you to build the image, but ``docker`` is
+   so far the most popular and developer-friendly tool out there. A typical way of building the image looks
+   as follows (``my-image:0.0.1`` is the custom tag of your image, containing the version).
+   In case you use some kind of registry from which you will be using the image, it is usually named
+   in the form of ``registry/image-name``. The name of the image has to be configured for the deployment
+   method with which your image will be deployed. This can be set, for example, as the image name in the
+   `docker-compose file <running-airflow-in-docker>`_ or in the `Helm chart <helm-chart>`_.
+
+.. code-block:: shell
+
+   docker build . -f Dockerfile --tag my-image:0.0.1
+
+3) [Optional] Test the image. Airflow contains a tool that allows you to test the image. This step, however,
+   requires locally checked-out or extracted Airflow sources. If you happen to have the sources, you can
+   test the image by running this command (in the airflow root folder). The output will tell you if the image
+   is "good-to-go".
+
+.. code-block:: shell
+
+   ./scripts/ci/tools/verify_docker_image.sh PROD my-image:0.0.1
+
+4) Once you have built the image locally, you usually have several options to make it available for your deployment:
+
+* For ``docker-compose`` deployment, that's all you need. The image is stored in the docker engine cache
+  and docker compose will use it from there.
+
+* For some - development-targeted - Kubernetes deployments you can load the images directly into the
+  Kubernetes cluster. Clusters such as ``kind`` or ``minikube`` have a dedicated ``load`` command to load the
+  images into the cluster.
+
+* Last but not least - you can push your image to a remote registry, which is the most common way
+  of storing and exposing images, and the most portable way of publishing them (as sketched below). Both
+  Docker-Compose and Kubernetes can make use of images exposed via registries.
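+
+  A minimal sketch of tagging and pushing to a registry (the registry name is illustrative):
+
+  .. code-block:: bash
+
+     docker tag my-image:0.0.1 my-registry.example.com/my-image:0.0.1
+     docker push my-registry.example.com/my-image:0.0.1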
+
+The most common scenarios where you want to build your own image are adding a new ``apt`` package,
+adding a new ``PyPI`` dependency and embedding DAGs into the image.
+Example Dockerfiles for those scenarios are below, and you can read further
+for more complex cases which might involve either extending or customizing the image.
+
+Adding a new ``apt`` package
+............................
+
+The following example adds ``vim`` to the airflow image.
+
+.. exampleinclude:: docker-examples/extending/add-apt-packages/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+
+Adding a new ``PyPI`` package
+.............................
+
+The following example adds the ``lxml`` Python package from PyPI to the image.
+
+.. exampleinclude:: docker-examples/extending/add-pypi-packages/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+Embedding DAGs
+..............
+
+The following example adds ``test_dag.py`` to your image in the ``/opt/airflow/dags`` folder.
+
+.. exampleinclude:: docker-examples/extending/embedding-dags/Dockerfile
+    :language: Dockerfile
+    :start-after: [START Dockerfile]
+    :end-before: [END Dockerfile]
+
+
+.. exampleinclude:: docker-examples/extending/embedding-dags/test_dag.py
+    :language: Python
+    :start-after: [START dag]
+    :end-before: [END dag]
+
+
 
 Extending vs. customizing the image
 -----------------------------------
 
+You might want to know very quickly how you can extend or customize the existing image
+for Apache Airflow. This chapter gives you a short answer to those questions.
+
+
 Here is the comparison of the two types of building images. Here is your guide if you want to choose
 how you want to build your image.
 
@@ -132,8 +250,8 @@ You should be aware, about a few things:
 Examples of image extending
 ---------------------------
 
-An ``apt`` package example
-..........................
+Example of adding ``apt`` package
+.................................
 
 The following example adds ``vim`` to the airflow image.
 
@@ -142,8 +260,8 @@ The following example adds ``vim`` to the airflow image.
     :start-after: [START Dockerfile]
     :end-before: [END Dockerfile]
 
-A ``PyPI`` package example
-..........................
+Example of adding ``PyPI`` package
+..................................
 
 The following example adds ``lxml`` python package from PyPI to the image.
 
@@ -152,8 +270,8 @@ The following example adds ``lxml`` python package from PyPI to the image.
     :start-after: [START Dockerfile]
     :end-before: [END Dockerfile]
 
-A ``umask`` requiring example
-.............................
+Example when a writable directory is needed
+...........................................
 
 The following example adds a new directory that is supposed to be writable for any arbitrary user
 running the container.
@@ -164,8 +282,8 @@ running the container.
     :end-before: [END Dockerfile]
 
 
-A ``build-essential`` requiring package example
-...............................................
+Example when you add packages requiring compilation
+...................................................
 
 The following example adds ``mpi4py`` package which requires both ``build-essential`` and ``mpi compiler``.
 
@@ -177,8 +295,8 @@ The following example adds ``mpi4py`` package which requires both ``build-essent
 The size of this image is ~ 1.1 GB when built. As you will see further, you can achieve 20% reduction in
 size of the image in case you use "Customizing" rather than "Extending" the image.
 
-DAG embedding example
-.....................
+Example when you want to embed DAGs
+...................................
 
 The following example adds ``test_dag.py`` to your image in the ``/opt/airflow/dags`` folder.
 
@@ -223,27 +341,28 @@ to add extra dependencies needed at early stages of image building.
 
 When customizing the image you can choose a number of options how you install Airflow:
 
-   * From the PyPI releases (default)
-   * From the custom installation sources - using additional/replacing the original apt or PyPI repositories
-   * From local sources. This is used mostly during development.
-   * From tag or branch, or specific commit from a GitHub Airflow repository (or fork). This is particularly
-     useful when you build image for a custom version of Airflow that you keep in your fork and you do not
-     want to release the custom Airflow version to PyPI.
-   * From locally stored binary packages for Airflow, Airflow Providers and other dependencies. This is
-     particularly useful if you want to build Airflow in a highly-secure environment where all such packages
-     must be vetted by your security team and stored in your private artifact registry. This also
-     allows to build airflow image in an air-gaped environment.
-   * Side note. Building ``Airflow`` in an ``air-gaped`` environment sounds pretty funny, doesn't it?
+* From the PyPI releases (default)
+* From custom installation sources - adding to or replacing the original apt or PyPI repositories
+* From local sources. This is used mostly during development.
+* From a tag or branch, or a specific commit from a GitHub Airflow repository (or fork). This is particularly
+  useful when you build an image for a custom version of Airflow that you keep in your fork and you do not
+  want to release the custom Airflow version to PyPI.
+* From locally stored binary packages for Airflow, Airflow Providers and other dependencies. This is
+  particularly useful if you want to build Airflow in a highly-secure environment where all such packages
+  must be vetted by your security team and stored in your private artifact registry. This also
+  allows you to build the Airflow image in an air-gapped environment.
+* Side note. Building ``Airflow`` in an ``air-gapped`` environment sounds pretty funny, doesn't it?
 
 You can also add a range of customizations while building the image:
 
-   * base python image you use for Airflow
-   * version of Airflow to install
-   * extras to install for Airflow (or even removing some default extras)
-   * additional apt/python dependencies to use while building Airflow (DEV dependencies)
-   * additional apt/python dependencies to install for runtime version of Airflow (RUNTIME dependencies)
-   * additional commands and variables to set if needed during building or preparing Airflow runtime
-   * choosing constraint file to use when installing Airflow
+* base python image you use for Airflow
+* version of Airflow to install
+* extras to install for Airflow (or even removing some default extras)
+* additional apt/python dependencies to use while building Airflow (DEV dependencies)
+* add a ``requirements.txt`` file to the ``docker-context-files`` directory to add extra requirements
+* additional apt/python dependencies to install for runtime version of Airflow (RUNTIME dependencies)
+* additional commands and variables to set if needed during building or preparing Airflow runtime
+* choosing constraint file to use when installing Airflow
 
 Additional explanation is needed for the last point. Airflow uses constraints to make sure
 that it can be predictably installed, even if some new versions of Airflow dependencies are
@@ -262,6 +381,12 @@ of constraints that you manually prepared.
 You can read more about constraints in the documentation of the
 `Installation <http://airflow.apache.org/docs/apache-airflow/stable/installation.html#constraints-files>`_
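+
+For example, a sketch of passing your own constraints file stored in ``docker-context-files``
+(the file name is illustrative):
+
+.. code-block:: bash
+
+   docker build . \
+       --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/my-constraints-3.8.txt" \
+       --tag my-image:0.0.1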
 
+Note that if you place ``requirements.txt`` in the ``docker-context-files`` folder, it will be
+used to install all requirements declared there. It is recommended that the file
+contains dependencies pinned with the ``==`` version specifier, to achieve a
+stable set of requirements, independent of whether someone releases a newer version. However, you have
+to make sure to update those requirements and rebuild the images to account for the latest security fixes.
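+
+A minimal sketch of this flow (the pinned version is illustrative):
+
+.. code-block:: bash
+
+   echo "lxml==4.6.3" > ./docker-context-files/requirements.txt
+   docker build . --tag my-image:0.0.1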
+
 Examples of image customizing
 -----------------------------
 
@@ -404,13 +529,13 @@ Such customizations are independent of the way how airflow is installed.
 
 The following - rather complex - example shows capabilities of:
 
-  * Adding airflow extras (slack, odbc)
-  * Adding PyPI dependencies (``azure-storage-blob, oauth2client, beautifulsoup4, dateparser, rocketchat_API,typeform``)
-  * Adding custom environment variables while installing ``apt`` dependencies - both DEV and RUNTIME
-    (``ACCEPT_EULA=Y'``)
-  * Adding custom curl command for adding keys and configuring additional apt sources needed to install
-    ``apt`` dependencies (both DEV and RUNTIME)
-  * Adding custom ``apt`` dependencies, both DEV (``msodbcsql17 unixodbc-dev g++) and runtime msodbcsql17 unixodbc git procps vim``)
+* Adding airflow extras (slack, odbc)
+* Adding PyPI dependencies (``azure-storage-blob, oauth2client, beautifulsoup4, dateparser, rocketchat_API, typeform``)
+* Adding custom environment variables while installing ``apt`` dependencies - both DEV and RUNTIME
+  (``ACCEPT_EULA=Y``)
+* Adding a custom curl command for adding keys and configuring additional apt sources needed to install
+  ``apt`` dependencies (both DEV and RUNTIME)
+* Adding custom ``apt`` dependencies, both DEV (``msodbcsql17 unixodbc-dev g++``) and runtime (``msodbcsql17 unixodbc git procps vim``)
 
 .. exampleinclude:: docker-examples/customizing/custom-sources.sh
     :language: bash
@@ -466,11 +591,11 @@ security vetting and only use the new packages when they were vetted.
 On a separate (air-gaped) system, all the PyPI packages can be copied to ``docker-context-files``
 where you can build the image using the packages downloaded by passing those build args:
 
-  * ``INSTALL_FROM_DOCKER_CONTEXT_FILES="true"``  - to use packages present in ``docker-context-files``
-  * ``AIRFLOW_PRE_CACHED_PIP_PACKAGES="false"``  - to not pre-cache packages from PyPI when building image
-  * ``AIRFLOW_CONSTRAINTS_LOCATION=/docker-context-files/YOUR_CONSTRAINT_FILE.txt`` - to downloaded constraint files
-  * (Optional) ``INSTALL_MYSQL_CLIENT="false"`` if you do not want to install ``MySQL``
-    client from the Oracle repositories. In this case also make sure that your
+* ``INSTALL_FROM_DOCKER_CONTEXT_FILES="true"``  - to use packages present in ``docker-context-files``
+* ``AIRFLOW_PRE_CACHED_PIP_PACKAGES="false"``  - to not pre-cache packages from PyPI when building image
+* ``AIRFLOW_CONSTRAINTS_LOCATION=/docker-context-files/YOUR_CONSTRAINT_FILE.txt`` - to use the downloaded constraint files
+* (Optional) ``INSTALL_MYSQL_CLIENT="false"`` if you do not want to install ``MySQL``
+  client from the Oracle repositories. In this case also make sure that your
 
 Note, that the solution we have for installing python packages from local packages, only solves the problem
 of "air-gaped" python installation. The Docker image also downloads ``apt`` dependencies and ``node-modules``.
@@ -508,7 +633,7 @@ There are a few things to remember when you modify the ``Dockerfile``:
   and only the required folders are added through exclusion (!). This allows to keep docker context small
   because there are many binary artifacts generated in the sources of Airflow and if they are added to
   the context, the time of building the image would increase significantly. If you want to add any new
-  folders to be available in the image you must add it here with leading ``!``.
+  folders to be available in the image you must add it here with leading ``!``
 
   .. code-block:: text
 
diff --git a/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh b/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh
index 230ff1e..0500459 100755
--- a/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh
+++ b/docs/docker-stack/docker-examples/customizing/add-build-essential-custom.sh
@@ -28,6 +28,6 @@ docker build . \
     --build-arg ADDITIONAL_PYTHON_DEPS="mpi4py" \
     --build-arg ADDITIONAL_DEV_APT_DEPS="libopenmpi-dev" \
     --build-arg ADDITIONAL_RUNTIME_APT_DEPS="openmpi-common" \
-    --tag "$(basename "$0")"
+    --tag "my-build-essential-image:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-build-essential-image:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/custom-sources.sh b/docs/docker-stack/docker-examples/customizing/custom-sources.sh
index 22223c4..8f087b3 100755
--- a/docs/docker-stack/docker-examples/customizing/custom-sources.sh
+++ b/docs/docker-stack/docker-examples/customizing/custom-sources.sh
@@ -43,6 +43,6 @@ docker build . -f Dockerfile \
     curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list" \
     --build-arg ADDITIONAL_RUNTIME_APT_ENV="ACCEPT_EULA=Y" \
     --build-arg ADDITIONAL_RUNTIME_APT_DEPS="msodbcsql17 unixodbc git procps vim" \
-    --tag "$(basename "$0")"
+    --tag "my-custom-sources-image:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-custom-sources-image:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/github-different-repository.sh b/docs/docker-stack/docker-examples/customizing/github-different-repository.sh
index 35e685e..b38ebda 100755
--- a/docs/docker-stack/docker-examples/customizing/github-different-repository.sh
+++ b/docs/docker-stack/docker-examples/customizing/github-different-repository.sh
@@ -26,6 +26,6 @@ docker build . \
     --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/potiuk/airflow/archive/main.tar.gz#egg=apache-airflow" \
     --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-main" \
     --build-arg CONSTRAINTS_GITHUB_REPOSITORY="potiuk/airflow" \
-    --tag "$(basename "$0")"
+    --tag "github-different-repository-image:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "github-different-repository-image:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/github-master.sh b/docs/docker-stack/docker-examples/customizing/github-main.sh
similarity index 94%
rename from docs/docker-stack/docker-examples/customizing/github-master.sh
rename to docs/docker-stack/docker-examples/customizing/github-main.sh
index 3ce40ac..ed1dc36 100755
--- a/docs/docker-stack/docker-examples/customizing/github-master.sh
+++ b/docs/docker-stack/docker-examples/customizing/github-main.sh
@@ -26,6 +26,6 @@ docker build . \
     --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
     --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/main.tar.gz#egg=apache-airflow" \
     --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-main" \
-    --tag "$(basename "$0")"
+    --tag "my-github-main:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-github-main:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/github-v2-1-test.sh b/docs/docker-stack/docker-examples/customizing/github-v2-1-test.sh
index 6ec0558..b8516dd 100755
--- a/docs/docker-stack/docker-examples/customizing/github-v2-1-test.sh
+++ b/docs/docker-stack/docker-examples/customizing/github-v2-1-test.sh
@@ -26,6 +26,6 @@ docker build . \
     --build-arg PYTHON_BASE_IMAGE="python:3.8-slim-buster" \
     --build-arg AIRFLOW_INSTALLATION_METHOD="https://github.com/apache/airflow/archive/v2-1-test.tar.gz#egg=apache-airflow" \
     --build-arg AIRFLOW_CONSTRAINTS_REFERENCE="constraints-2-0" \
-    --tag "$(basename "$0")"
+    --tag "my-github-v2-1:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-github-v2-1:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh b/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh
index 7dc43bd..32bd1fc 100755
--- a/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh
+++ b/docs/docker-stack/docker-examples/customizing/pypi-dev-runtime-deps.sh
@@ -29,6 +29,6 @@ docker build . \
     --build-arg ADDITIONAL_PYTHON_DEPS="pandas" \
     --build-arg ADDITIONAL_DEV_APT_DEPS="gcc g++" \
     --build-arg ADDITIONAL_RUNTIME_APT_DEPS="default-jre-headless" \
-    --tag "$(basename "$0")"
+    --tag "my-pypi-dev-runtime:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-pypi-dev-runtime:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh b/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh
index 20deef2..4373121 100755
--- a/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh
+++ b/docs/docker-stack/docker-examples/customizing/pypi-extras-and-deps.sh
@@ -27,6 +27,6 @@ docker build . \
     --build-arg AIRFLOW_VERSION="2.0.2" \
     --build-arg ADDITIONAL_AIRFLOW_EXTRAS="mssql,hdfs" \
     --build-arg ADDITIONAL_PYTHON_DEPS="oauth2client" \
-    --tag "$(basename "$0")"
+    --tag "my-pypi-extras-and-deps:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-pypi-extras-and-deps:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh b/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh
index bc72ac1..c8e1f39 100755
--- a/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh
+++ b/docs/docker-stack/docker-examples/customizing/pypi-selected-version.sh
@@ -25,6 +25,6 @@ cd "${AIRFLOW_SOURCES}"
 docker build . \
     --build-arg PYTHON_BASE_IMAGE="python:3.7-slim-buster" \
     --build-arg AIRFLOW_VERSION="2.0.2" \
-    --tag "$(basename "$0")"
+    --tag "my-pypi-selected-version:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-pypi-selected-version:0.0.1"
diff --git a/docs/docker-stack/docker-examples/customizing/stable-airflow.sh b/docs/docker-stack/docker-examples/customizing/stable-airflow.sh
index d3471ac..8037785 100755
--- a/docs/docker-stack/docker-examples/customizing/stable-airflow.sh
+++ b/docs/docker-stack/docker-examples/customizing/stable-airflow.sh
@@ -23,6 +23,6 @@ cd "${AIRFLOW_SOURCES}"
 
 # [START build]
 docker build . \
-    --tag "$(basename "$0")"
+    --tag "my-stable-airflow:0.0.1"
 # [END build]
-docker rmi --force "$(basename "$0")"
+docker rmi --force "my-stable-airflow:0.0.1"
diff --git a/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
index f0f056b..62de197 100644
--- a/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
@@ -15,7 +15,7 @@
 
 # This is an example Dockerfile. It is not intended for PRODUCTION use
 # [START Dockerfile]
-FROM apache/airflow:2.0.2
+FROM apache/airflow
 USER root
 RUN apt-get update \
   && apt-get install -y --no-install-recommends \
diff --git a/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
index 938dd57..b34fdc9 100644
--- a/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
@@ -15,7 +15,7 @@
 
 # This is an example Dockerfile. It is not intended for PRODUCTION use
 # [START Dockerfile]
-FROM apache/airflow:2.0.2
+FROM apache/airflow
 USER root
 RUN apt-get update \
   && apt-get install -y --no-install-recommends \
diff --git a/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
index c21d5de..cc2559f 100644
--- a/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
@@ -15,6 +15,6 @@
 
 # This is an example Dockerfile. It is not intended for PRODUCTION use
 # [START Dockerfile]
-FROM apache/airflow:2.0.2
+FROM apache/airflow
 RUN pip install --no-cache-dir lxml
 # [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
index d748e7b..c849697 100644
--- a/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
@@ -15,7 +15,7 @@
 
 # This is an example Dockerfile. It is not intended for PRODUCTION use
 # [START Dockerfile]
-FROM apache/airflow:2.0.2
+FROM apache/airflow
 
 COPY --chown=airflow:root test_dag.py /opt/airflow/dags
 
diff --git a/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile b/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
index 7528852..ba07f68 100644
--- a/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
@@ -15,7 +15,7 @@
 
 # This is an example Dockerfile. It is not intended for PRODUCTION use
 # [START Dockerfile]
-FROM apache/airflow:2.0.2
+FROM apache/airflow
 RUN umask 0002; \
     mkdir -p ~/writeable-directory
 # [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/restricted/restricted_environments.sh b/docs/docker-stack/docker-examples/restricted/restricted_environments.sh
index 6ff16d1..2546dc6 100755
--- a/docs/docker-stack/docker-examples/restricted/restricted_environments.sh
+++ b/docs/docker-stack/docker-examples/restricted/restricted_environments.sh
@@ -43,5 +43,6 @@ docker build . \
     --build-arg INSTALL_MYSQL_CLIENT="false" \
     --build-arg AIRFLOW_PRE_CACHED_PIP_PACKAGES="false" \
     --build-arg INSTALL_FROM_DOCKER_CONTEXT_FILES="true" \
-    --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/constraints-3.7.txt"
+    --build-arg AIRFLOW_CONSTRAINTS_LOCATION="/docker-context-files/constraints-3.7.txt" \
+    --tag my-restricted-environment:0.0.1
 # [END build]
diff --git a/docs/docker-stack/entrypoint.rst b/docs/docker-stack/entrypoint.rst
index 48a774d..c386a67 100644
--- a/docs/docker-stack/entrypoint.rst
+++ b/docs/docker-stack/entrypoint.rst
@@ -120,8 +120,82 @@ takes precedence over the :envvar:`AIRFLOW__CORE__SQL_ALCHEMY_CONN` variable.
 For newer versions, the ``airflow db check`` command is used, which means that a ``select 1 as is_alive;`` query
 is executed. This also means that you can keep your password in secret backend.
 
+Waits for celery broker connection
+----------------------------------
+
+In case a Postgres or MySQL DB is used, and one of the ``scheduler``, ``celery``, ``worker``, or ``flower``
+commands is used, the entrypoint will wait until the celery broker connection is available.
+
+The script detects the backend type depending on the URL schema and assigns default port numbers if not specified
+in the URL. Then it loops until a connection to the host/port specified can be established.
+It tries :envvar:`CONNECTION_CHECK_MAX_COUNT` times and sleeps :envvar:`CONNECTION_CHECK_SLEEP_TIME` between checks.
+To disable the check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
+
+Supported schemes:
+
+* ``amqp(s)://``  (rabbitmq) - default port 5672
+* ``redis://``               - default port 6379
+* ``postgres://``            - default port 5432
+* ``mysql://``               - default port 3306
+
+Waiting for connection involves checking if a matching port is open.
+The host information is derived from the variables :envvar:`AIRFLOW__CELERY__BROKER_URL` and
+:envvar:`AIRFLOW__CELERY__BROKER_URL_CMD`. If :envvar:`AIRFLOW__CELERY__BROKER_URL_CMD` variable
+is passed to the container, it is evaluated as a command to execute, and the result of this evaluation is used
+as :envvar:`AIRFLOW__CELERY__BROKER_URL`. The :envvar:`AIRFLOW__CELERY__BROKER_URL_CMD` variable
+takes precedence over the :envvar:`AIRFLOW__CELERY__BROKER_URL` variable.
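+
+For example, a sketch of disabling the wait entirely (the image tag is illustrative):
+
+.. code-block:: bash
+
+  docker run -it --env "CONNECTION_CHECK_MAX_COUNT=0" apache/airflow:2.1.0-python3.6 bash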
+
+.. _entrypoint:commands:
+
+Executing commands
+------------------
+
+If first argument equals to "bash" - you are dropped to a bash shell or you can executes bash command
+if you specify extra arguments. For example:
+
+.. code-block:: bash
+
+  docker run -it apache/airflow:2.1.0-python3.6 bash -c "ls -la"
+  total 16
+  drwxr-xr-x 4 airflow root 4096 Jun  5 18:12 .
+  drwxr-xr-x 1 root    root 4096 Jun  5 18:12 ..
+  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 dags
+  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 logs
+
+If the first argument is equal to ``python`` - you are dropped into a python shell, or python commands are executed if
+you pass extra parameters. For example:
+
+.. code-block:: bash
+
+  > docker run -it apache/airflow:2.1.0-python3.6 python -c "print('test')"
+  test
+
+If first argument equals to "airflow" - the rest of the arguments is treated as an airflow command
+to execute. Example:
+
+.. code-block:: bash
+
+   docker run -it apache/airflow:2.1.0-python3.6 airflow webserver
+
+If there are any other arguments - they are simply passed to the "airflow" command.
+
+.. code-block:: bash
+
+  > docker run -it apache/airflow:2.1.0-python3.6 version
+  2.1.0
+
+Additional quick test options
+-----------------------------
+
+The options below are mostly used for quickly testing the image - for example with the
+quick-start docker-compose or when you want to perform a local test with new packages
+added. They are not supposed to be run in a production environment, as they add
+overhead for executing additional commands. In production, those options should be realized
+either as maintenance operations on the database or embedded in the custom image used
+(when you want to add new packages).
+
 Upgrading Airflow DB
---------------------
+....................
 
 If you set :envvar:`_AIRFLOW_DB_UPGRADE` variable to a non-empty value, the entrypoint will run
 the ``airflow db upgrade`` command right after verifying the connection. You can also use this
@@ -131,7 +205,7 @@ intended only for testing purpose, never use SQLite in production as it has seve
 comes to concurrency.
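+
+For example, a sketch of triggering the upgrade at container start (the image tag is
+illustrative):
+
+.. code-block:: bash
+
+  docker run -it --env "_AIRFLOW_DB_UPGRADE=true" apache/airflow:2.1.0-python3.6 bash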
 
 Creating admin user
--------------------
+...................
 
 The entrypoint can also create a webserver user automatically when you enter it. You need to set
 :envvar:`_AIRFLOW_WWW_USER_CREATE` to a non-empty value in order to do that. This is not intended for
@@ -185,66 +259,24 @@ database and creating an ``admin/admin`` Admin user with the following command:
 The commands above perform initialization of the SQLite database, create admin user with admin password
 and Admin role. They also forward local port ``8080`` to the webserver port and finally start the webserver.
 
-Waits for celery broker connection
-----------------------------------
-
-In case Postgres or MySQL DB is used, and one of the ``scheduler``, ``celery``, ``worker``, or ``flower``
-commands are used the entrypoint will wait until the celery broker DB connection is available.
-
-The script detects backend type depending on the URL schema and assigns default port numbers if not specified
-in the URL. Then it loops until connection to the host/port specified can be established
-It tries :envvar:`CONNECTION_CHECK_MAX_COUNT` times and sleeps :envvar:`CONNECTION_CHECK_SLEEP_TIME` between checks.
-To disable check, set ``CONNECTION_CHECK_MAX_COUNT=0``.
-
-Supported schemes:
-
-* ``amqp(s)://``  (rabbitmq) - default port 5672
-* ``redis://``               - default port 6379
-* ``postgres://``            - default port 5432
-* ``mysql://``               - default port 3306
-
-Waiting for connection involves checking if a matching port is open.
-The host information is derived from the variables :envvar:`AIRFLOW__CELERY__BROKER_URL` and
-:envvar:`AIRFLOW__CELERY__BROKER_URL_CMD`. If :envvar:`AIRFLOW__CELERY__BROKER_URL_CMD` variable
-is passed to the container, it is evaluated as a command to execute and result of this evaluation is used
-as :envvar:`AIRFLOW__CELERY__BROKER_URL`. The :envvar:`AIRFLOW__CELERY__BROKER_URL_CMD` variable
-takes precedence over the :envvar:`AIRFLOW__CELERY__BROKER_URL` variable.
-
-.. _entrypoint:commands:
+Installing additional requirements
+..................................
 
-Executing commands
-------------------
+Installing additional requirements can be done by specifying the ``_PIP_ADDITIONAL_REQUIREMENTS`` variable.
+The variable should contain a list of requirements that should be additionally installed when entering
+the containers. Note that this option slows down the start of Airflow, as every time any container starts
+it must install the new packages. Therefore this option should only be used for testing. When testing is
+finished, you should create your custom image with the dependencies baked in.
 
-If first argument equals to "bash" - you are dropped to a bash shell or you can executes bash command
-if you specify extra arguments. For example:
+Example:
 
 .. code-block:: bash
 
-  docker run -it apache/airflow:2.1.0-python3.6 bash -c "ls -la"
-  total 16
-  drwxr-xr-x 4 airflow root 4096 Jun  5 18:12 .
-  drwxr-xr-x 1 root    root 4096 Jun  5 18:12 ..
-  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 dags
-  drwxr-xr-x 2 airflow root 4096 Jun  5 18:12 logs
-
-If first argument is equal to ``python`` - you are dropped in python shell or python commands are executed if
-you pass extra parameters. For example:
-
-.. code-block:: bash
-
-  > docker run -it apache/airflow:2.1.0-python3.6 python -c "print('test')"
-  test
-
-If first argument equals to "airflow" - the rest of the arguments is treated as an airflow command
-to execute. Example:
-
-.. code-block:: bash
-
-   docker run -it apache/airflow:2.1.0-python3.6 airflow webserver
-
-If there are any other arguments - they are simply passed to the "airflow" command
-
-.. code-block:: bash
+  docker run -it -p 8080:8080 \
+    --env "_PIP_ADDITIONAL_REQUIREMENTS=lxml==4.6.3 charset-normalizer==1.4.1" \
+    --env "_AIRFLOW_DB_UPGRADE=true" \
+    --env "_AIRFLOW_WWW_USER_CREATE=true" \
+    --env "_AIRFLOW_WWW_USER_PASSWORD_CMD=echo admin" \
+      apache/airflow:master-python3.8 webserver
 
-  > docker run -it apache/airflow:2.1.0-python3.6 version
-  2.1.0
+This method is only available in the Airflow Docker image starting from version 2.1.1.
diff --git a/docs/docker-stack/index.rst b/docs/docker-stack/index.rst
index 6ee77d0..46b39dc 100644
--- a/docs/docker-stack/index.rst
+++ b/docs/docker-stack/index.rst
@@ -38,15 +38,32 @@ Docker Image for Apache Airflow
 For the ease of deployment in production, the community releases a production-ready reference container
 image.
 
-The docker image provided (as convenience binary package) in the
-`apache/airflow DockerHub <https://hub.docker.com/r/apache/airflow>`_ is a bare image
-that has a few external dependencies and extras installed..
 
-The Apache Airflow image provided as convenience package is optimized for size, so
+The Apache Airflow community releases Docker Images which are ``reference images`` for Apache Airflow.
+Every time a new version of Airflow is released, the images are prepared in the
+`apache/airflow DockerHub <https://hub.docker.com/r/apache/airflow>`_
+for all the supported Python versions.
+
+You can find the following images there (assuming Airflow version |version|):
+
+* ``apache/airflow:latest``              - the latest released Airflow image with default Python version (3.6 currently)
+* ``apache/airflow:latest-pythonX.Y``    - the latest released Airflow image with specific Python version
+* ``apache/airflow:|version|``           - the versioned Airflow image with default Python version (3.6 currently)
+* ``apache/airflow:|version|-pythonX.Y`` - the versioned Airflow image with specific Python version
+
+Those are "reference" images. They contain the most common set of extras, dependencies and providers that are
+often used by the users and they are good to "try-things-out" when you want to just take airflow for a spin,
+
+The Apache Airflow image provided as a convenience package is optimized for size, and
 it provides just a bare minimal set of the extras and dependencies installed and in most cases
-you want to either extend or customize the image. You can see all possible extras in
-:doc:`extra-packages-ref`. The set of extras used in Airflow Production image are available in the
-`Dockerfile <https://github.com/apache/airflow/blob/2c6c7fdb2308de98e142618836bdf414df9768c8/Dockerfile#L39>`_.
+you want to either extend or customize the image. You can see all possible extras in :doc:`extra-packages-ref`.
+The set of extras used in the Airflow Production image is available in the
+`Dockerfile <https://github.com/apache/airflow/blob/2c6c7fdb2308de98e142618836bdf414df9768c8/Dockerfile#L37>`_.
+
+However, Airflow has more than 60 community-managed providers (installable via extras) and some of the
+default extras/providers installed are not used by everyone, sometimes other extras/providers
+are needed, sometimes (very often actually) you need to add your own custom dependencies,
+packages or even custom providers. You can learn how to do it in :ref:`Building the image <build:build_image>`.
 
 The production images are built in DockerHub from released versions and release candidates. There
 are also images published from branches but they are used mainly for development and testing purpose.
diff --git a/docs/docker-stack/recipes.rst b/docs/docker-stack/recipes.rst
index 66dbbcd..f27ed51 100644
--- a/docs/docker-stack/recipes.rst
+++ b/docs/docker-stack/recipes.rst
@@ -41,7 +41,7 @@ Then build a new image.
 
   docker build . \
     --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.2" \
-    -t my-airflow-image
+    --tag my-airflow-image:0.0.1
 
 
 Apache Hadoop Stack installation
@@ -67,4 +67,4 @@ Then build a new image.
 
   docker build . \
     --build-arg BASE_AIRFLOW_IMAGE="apache/airflow:2.0.2" \
-    -t my-airflow-image
+    --tag my-airflow-image:0.0.1
diff --git a/docs/helm-chart/production-guide.rst b/docs/helm-chart/production-guide.rst
index ec08212..bd61808 100644
--- a/docs/helm-chart/production-guide.rst
+++ b/docs/helm-chart/production-guide.rst
@@ -67,8 +67,31 @@ Depending on the size of you Airflow instance, you may want to adjust the follow
     # The maximum number of server connections to the result backend database from PgBouncer
     resultBackendPoolSize: 5
 
-DAG Files
----------
+Extending and customizing Airflow Image
+---------------------------------------
+
+The Apache Airflow community releases Docker Images which are ``reference images`` for Apache Airflow.
+However, Airflow has more than 60 community-managed providers (installable via extras) and some of the
+default extras/providers installed are not used by everyone, sometimes other extras/providers
+are needed, sometimes (very often actually) you need to add your own custom dependencies,
+packages or even custom providers, or add custom tools and binaries that are needed in
+your deployment.
+
+In Kubernetes and Docker terms this means that you need another image with your specific requirements.
+This is why you should learn how to build your own ``Docker`` (or more properly ``Container``) image.
+
+Typical scenarios where you would like to use your custom image:
+
+* Adding ``apt`` packages
+* Adding ``PyPI`` packages
+* Adding binary resources necessary for your deployment
+* Adding custom tools needed in your deployment
+
+See `Building the image <https://airflow.apache.org/docs/docker-stack/build.html>`_ for more
+details on how you can extend and customize the Airflow image.
+
+Managing DAG Files
+------------------
 
 See :doc:`manage-dags-files`.
 
diff --git a/docs/helm-chart/quick-start.rst b/docs/helm-chart/quick-start.rst
index b5f0846..09fa745 100644
--- a/docs/helm-chart/quick-start.rst
+++ b/docs/helm-chart/quick-start.rst
@@ -65,8 +65,17 @@ Run ``kubectl port-forward svc/airflow-webserver 8080:8080 -n airflow``
 to port-forward the Airflow UI to http://localhost:8080/ to confirm
 Airflow is working.
 
-Build a Docker image from your DAGs
------------------------------------
+Extending Airflow Image
+-----------------------
+
+The Apache Airflow community releases Docker Images which are ``reference images`` for Apache Airflow.
+However, when you try it out, you may want to add your own DAGs, custom dependencies,
+packages or even custom providers.
+
+The best way to achieve this is to build your own custom image.
+
+Adding DAGs to your image
+.........................
 
 1. Create a project
 
@@ -84,7 +93,7 @@ Build a Docker image from your DAGs
 
     .. code-block:: bash
 
-        docker build -t my-dags:0.0.1 .
+        docker build --tag my-dags:0.0.1 .
 
 
 3. Load the image into kind:
@@ -101,3 +110,90 @@ Build a Docker image from your DAGs
       helm upgrade $RELEASE_NAME --namespace $NAMESPACE \
           --set images.airflow.repository=my-dags \
           --set images.airflow.tag=0.0.1
+
+Adding ``apt`` packages to your image
+.....................................
+
+The example below adds the ``vim`` apt package.
+
+1. Create a project
+
+    .. code-block:: bash
+
+        mkdir my-airflow-project && cd my-airflow-project
+        cat <<EOM > Dockerfile
+        FROM apache/airflow
+        USER root
+        RUN apt-get update \
+          && apt-get install -y --no-install-recommends \
+                 vim \
+          && apt-get autoremove -yqq --purge \
+          && apt-get clean \
+          && rm -rf /var/lib/apt/lists/*
+        USER airflow
+        EOM
+
+
+2. Then build the image:
+
+    .. code-block:: bash
+
+        docker build --tag my-image:0.0.1 .
+
+
+3. Load the image into kind:
+
+    .. code-block:: bash
+
+      kind load docker-image my-image:0.0.1
+
+4. Upgrade Helm deployment:
+
+    .. code-block:: bash
+
+      helm upgrade $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE \
+          --set images.airflow.repository=my-image \
+          --set images.airflow.tag=0.0.1
+
+Adding ``PyPI`` packages to your image
+......................................
+
+The example below adds the ``lxml`` PyPI package.
+
+1. Create a project
+
+    .. code-block:: bash
+
+        mkdir my-airflow-project && cd my-airflow-project
+        cat <<EOM > Dockerfile
+        FROM apache/airflow
+        RUN pip install --no-cache-dir lxml
+        EOM
+
+
+2. Then build the image:
+
+    .. code-block:: bash
+
+        docker build --tag my-image:0.0.1 .
+
+
+3. Load the image into kind:
+
+    .. code-block:: bash
+
+      kind load docker-image my-image:0.0.1
+
+4. Upgrade Helm deployment:
+
+    .. code-block:: bash
+
+      helm upgrade $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE \
+          --set images.airflow.repository=my-image \
+          --set images.airflow.tag=0.0.1
+
+Further extending and customizing the image
+...........................................
+
+See `Building the image <https://airflow.apache.org/docs/docker-stack/build.html>`_ for more
+details on how you can extend and customize the Airflow image.
diff --git a/scripts/in_container/prod/entrypoint_prod.sh b/scripts/in_container/prod/entrypoint_prod.sh
index 27d1d90..123a2f8 100755
--- a/scripts/in_container/prod/entrypoint_prod.sh
+++ b/scripts/in_container/prod/entrypoint_prod.sh
@@ -311,6 +311,22 @@ if [[ -n "${_AIRFLOW_WWW_USER_CREATE=}" ]] ; then
     create_www_user
 fi
 
+if [[ -n "${_PIP_ADDITIONAL_REQUIREMENTS=}" ]] ; then
+    >&2 echo
+    >&2 echo "!!!!!  Installing additional requirements: '${_PIP_ADDITIONAL_REQUIREMENTS}' !!!!!!!!!!!!"
+    >&2 echo
+    >&2 echo "WARNING: This is a developpment/test feature only. NEVER use it in production!"
+    >&2 echo "         Instead, build a custom image as described in"
+    >&2 echo
+    >&2 echo "         https://airflow.apache.org/docs/docker-stack/build.html"
+    >&2 echo
+    >&2 echo "         Adding requirements at container startup is fragile and is done every time"
+    >&2 echo "         the container starts, so it is onlny useful for testing and trying out"
+    >&2 echo "         of adding dependencies."
+    >&2 echo
+    pip install --no-cache-dir --user ${_PIP_ADDITIONAL_REQUIREMENTS=}  # unquoted on purpose: the variable may contain multiple space-separated requirements
+fi
+
 
 # The `bash` and `python` commands should also verify the basic connections
 # So they are run after the DB check