Posted to commits@airflow.apache.org by ep...@apache.org on 2022/06/30 14:27:56 UTC

[airflow] 07/11: Fix inverted section levels in best-practices.rst (#23968)

This is an automated email from the ASF dual-hosted git repository.

ephraimanierobi pushed a commit to branch v2-3-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit 96a2bcacd80f18051c9f7a0ad815a9a104104927
Author: Kengo Seki <se...@apache.org>
AuthorDate: Sat May 28 01:56:13 2022 +0900

    Fix inverted section levels in best-practices.rst (#23968)
    
    This PR fixes inverted levels in the sections added to the "Best Practices" document in #21879.
    
    (cherry picked from commit 8e7b76de9a726a8d085fe5b875331b66bf3cd045)
---
 docs/apache-airflow/best-practices.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/apache-airflow/best-practices.rst b/docs/apache-airflow/best-practices.rst
index d4606548d3..aadeccfeac 100644
--- a/docs/apache-airflow/best-practices.rst
+++ b/docs/apache-airflow/best-practices.rst
@@ -590,7 +590,7 @@ For connection, use :envvar:`AIRFLOW_CONN_{CONN_ID}`.
         assert "cat" == Connection.get("my_conn").login
 
 Metadata DB maintenance
------------------------
+^^^^^^^^^^^^^^^^^^^^^^^
 
 Over time, the metadata database will increase its storage footprint as more DAG and task runs and event logs accumulate.
 
@@ -599,15 +599,15 @@ You can use the Airflow CLI to purge old data with the command ``airflow db clea
 See :ref:`db clean usage<cli-db-clean>` for more details.
 
 Upgrades and downgrades
------------------------
+^^^^^^^^^^^^^^^^^^^^^^^
 
 Backup your database
-^^^^^^^^^^^^^^^^^^^^
+--------------------
 
 It's always a wise idea to backup the metadata database before undertaking any operation modifying the database.
 
 Disable the scheduler
-^^^^^^^^^^^^^^^^^^^^^
+---------------------
 
 You might consider disabling the Airflow cluster while you perform such maintenance.
 
@@ -618,13 +618,13 @@ A *better* way (though it's a bit more manual) is to use the ``dags pause`` comm
 .. _integration-test-dags:
 
 Add "integration test" DAGs
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------------
 
 It can be helpful to add a couple "integration test" DAGs that use all the common services in your ecosystem (e.g. S3, Snowflake, Vault) but with dummy resources or "dev" accounts.  These test DAGs can be the ones you turn on *first* after an upgrade, because if they fail, it doesn't matter and you can revert to your backup without negative consequences.  However, if they succeed, they should prove that your cluster is able to run tasks with the libraries and services that you need to use.
 
 For example, if you use an external secrets backend, make sure you have a task that retrieves a connection.  If you use KubernetesPodOperator, add a task that runs ``sleep 30; echo "hello"``.  If you need to write to s3, do so in a test task.  And if you need to access a database, add a task that does ``select 1`` from the server.
 
 Prune data before upgrading
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------------
 
 Some database migrations can be time-consuming.  If your metadata database is very large, consider pruning some of the old data with the :ref:`db clean<cli-db-clean>` command prior to performing the upgrade.  *Use with caution.*
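The maintenance workflow described in the patched sections above (back up the metadata DB, pause DAGs instead of stopping the scheduler, prune old data before upgrading) can be sketched with the Airflow CLI. This is not part of the commit; it is a hedged illustration assuming Airflow 2.3+ (where ``airflow db clean`` exists) and a Postgres metadata database — the ``pg_dump`` invocation, connection URL variable, and DAG id are illustrative placeholders.

```shell
# Back up the metadata database first (illustrative: Postgres via pg_dump;
# AIRFLOW_DB_URL and the dump filename are placeholders)
pg_dump "$AIRFLOW_DB_URL" > airflow_metadata_backup.sql

# Pause DAGs rather than stopping the scheduler outright,
# so no new task instances start mid-maintenance (example_dag is a placeholder)
airflow dags pause example_dag

# Preview what "db clean" would purge, then run it for real
airflow db clean --clean-before-timestamp 2022-01-01 --dry-run
airflow db clean --clean-before-timestamp 2022-01-01 --yes
```

After the upgrade, unpause the "integration test" DAGs first (``airflow dags unpause ...``) and confirm they succeed before re-enabling the rest.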