Posted to commits@kyuubi.apache.org by ch...@apache.org on 2022/11/16 14:03:47 UTC

[incubator-kyuubi-website] branch master updated: Add 1.6.1 docs (#94)

This is an automated email from the ASF dual-hosted git repository.

chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi-website.git


The following commit(s) were added to refs/heads/master by this push:
     new f1535ac  Add 1.6.1 docs (#94)
f1535ac is described below

commit f1535acb0db134569ed20a900731e812db41a976
Author: cxzl25 <cx...@users.noreply.github.com>
AuthorDate: Wed Nov 16 22:03:40 2022 +0800

    Add 1.6.1 docs (#94)
---
 content/docs/latest                                |     1 +
 .../_sources/appendix/index.rst.txt                |    24 -
 .../_sources/appendix/terminology.md.txt           |   164 -
 .../_sources/changelog/v1.5.1-incubating.md.txt    |    11 -
 .../_sources/changelog/v1.5.2-incubating.md.txt    |    16 -
 .../_sources/changelog/v1.6.0-incubating.md.txt    |   618 --
 .../client/advanced/configurations.rst.txt         |    17 -
 .../client/advanced/features/engine_pool.rst.txt   |    18 -
 .../advanced/features/engine_resouces.rst.txt      |    18 -
 .../advanced/features/engine_share_level.rst.txt   |    18 -
 .../client/advanced/features/engine_ttl.rst.txt    |    18 -
 .../client/advanced/features/engine_type.rst.txt   |    18 -
 .../client/advanced/features/index.rst.txt         |    30 -
 .../client/advanced/features/plan_only.rst.txt     |    18 -
 .../client/advanced/features/scala.rst.txt         |    18 -
 .../_sources/client/advanced/index.rst.txt         |    25 -
 .../_sources/client/advanced/kerberos.md.txt       |   224 -
 .../_sources/client/advanced/logging.rst.txt       |    17 -
 .../_sources/client/bi_tools/datagrip.md.txt       |    57 -
 .../_sources/client/bi_tools/dbeaver.rst.txt       |   125 -
 .../_sources/client/bi_tools/hue.md.txt            |   130 -
 .../_sources/client/bi_tools/index.rst.txt         |    32 -
 .../_sources/client/bi_tools/powerbi.rst.txt       |    21 -
 .../_sources/client/bi_tools/superset.rst.txt      |    21 -
 .../_sources/client/bi_tools/tableau.rst.txt       |    21 -
 .../_sources/client/cli/hive_beeline.rst.txt       |    31 -
 .../_sources/client/cli/index.rst.txt              |    23 -
 .../_sources/client/cli/kyuubi_beeline.rst.txt     |    22 -
 .../_sources/client/index.rst.txt                  |    39 -
 .../_sources/client/jdbc/hive_jdbc.md.txt          |    82 -
 .../_sources/client/jdbc/index.rst.txt             |    25 -
 .../_sources/client/jdbc/kyuubi_jdbc.rst.txt       |   160 -
 .../_sources/client/jdbc/mysql_jdbc.rst.txt        |    26 -
 .../_sources/client/odbc/index.rst.txt             |    24 -
 .../_sources/client/python/index.rst.txt           |    24 -
 .../_sources/client/python/pyhive.rst.txt          |    22 -
 .../_sources/client/rest/index.rst.txt             |    24 -
 .../_sources/client/rest/rest_api.md.txt           |   124 -
 .../_sources/client/thrift/index.rst.txt           |    24 -
 .../_sources/client/ui/index.rst.txt               |    24 -
 .../_sources/community/CONTRIBUTING.md.txt         |    61 -
 .../_sources/community/collaborators.md.txt        |    22 -
 .../_sources/community/index.rst.txt               |    27 -
 .../_sources/community/release.md.txt              |   283 -
 .../connector/flink/flink_table_store.rst.txt      |   111 -
 .../_sources/connector/flink/hudi.rst.txt          |   117 -
 .../_sources/connector/flink/iceberg.rst.txt       |   121 -
 .../_sources/connector/flink/index.rst.txt         |    24 -
 .../_sources/connector/hive/index.rst.txt          |    20 -
 .../_sources/connector/index.rst.txt               |    42 -
 .../_sources/connector/spark/delta_lake.rst.txt    |    95 -
 .../spark/delta_lake_with_azure_blob.rst.txt       |   345 -
 .../connector/spark/flink_table_store.rst.txt      |    90 -
 .../_sources/connector/spark/hudi.rst.txt          |   112 -
 .../_sources/connector/spark/iceberg.rst.txt       |   124 -
 .../_sources/connector/spark/index.rst.txt         |    42 -
 .../_sources/connector/spark/kudu.md.txt           |   185 -
 .../_sources/connector/spark/tidb.rst.txt          |   103 -
 .../_sources/connector/spark/tpcds.rst.txt         |   108 -
 .../_sources/connector/spark/tpch.rst.txt          |   104 -
 .../connector/trino/flink_table_store.rst.txt      |    94 -
 .../_sources/connector/trino/iceberg.rst.txt       |    92 -
 .../_sources/connector/trino/index.rst.txt         |    23 -
 .../_sources/deployment/engine_lifecycle.md.txt    |    59 -
 .../deployment/engine_on_kubernetes.md.txt         |   121 -
 .../_sources/deployment/engine_on_yarn.md.txt      |   258 -
 .../_sources/deployment/hive_metastore.md.txt      |   210 -
 .../_sources/deployment/index.rst.txt              |    53 -
 .../deployment/kyuubi_on_kubernetes.md.txt         |   103 -
 .../_sources/deployment/settings.md.txt            |   620 --
 .../_sources/deployment/spark/aqe.md.txt           |   264 -
 .../deployment/spark/dynamic_allocation.md.txt     |   237 -
 .../deployment/spark/incremental_collection.md.txt |   121 -
 .../_sources/deployment/spark/index.rst.txt        |    32 -
 .../_sources/develop_tools/build_document.md.txt   |    74 -
 .../_sources/develop_tools/building.md.txt         |    86 -
 .../_sources/develop_tools/debugging.md.txt        |   110 -
 .../_sources/develop_tools/developer.md.txt        |    63 -
 .../_sources/develop_tools/distribution.md.txt     |    56 -
 .../_sources/develop_tools/idea_setup.md.txt       |    96 -
 .../_sources/develop_tools/index.rst.txt           |    31 -
 .../_sources/develop_tools/testing.md.txt          |    54 -
 .../extensions/engines/flink/index.rst.txt         |    25 -
 .../_sources/extensions/engines/hive/index.rst.txt |    25 -
 .../_sources/extensions/engines/index.rst.txt      |    30 -
 .../extensions/engines/spark/functions.md.txt      |    31 -
 .../extensions/engines/spark/index.rst.txt         |    26 -
 .../_sources/extensions/engines/spark/rules.md.txt |    82 -
 .../engines/spark/z-order-benchmark.md.txt         |   240 -
 .../extensions/engines/spark/z-order.md.txt        |   121 -
 .../extensions/engines/trino/index.rst.txt         |    26 -
 .../_sources/extensions/index.rst.txt              |    39 -
 .../extensions/server/applications.rst.txt         |   154 -
 .../extensions/server/authentication.rst.txt       |    83 -
 .../extensions/server/configuration.rst.txt        |    73 -
 .../_sources/extensions/server/events.rst.txt      |    22 -
 .../_sources/extensions/server/index.rst.txt       |    29 -
 .../docs/r1.6.0-incubating/_sources/index.rst.txt  |   145 -
 .../_sources/monitor/events.md.txt                 |    19 -
 .../_sources/monitor/index.rst.txt                 |    28 -
 .../_sources/monitor/logging.md.txt                |   268 -
 .../_sources/monitor/metrics.md.txt                |    95 -
 .../_sources/monitor/trouble_shooting.md.txt       |   265 -
 .../_sources/overview/index.rst.txt                |    25 -
 .../_sources/overview/kyuubi_vs_hive.md.txt        |    53 -
 .../overview/kyuubi_vs_thriftserver.md.txt         |   258 -
 .../_sources/quick_start/index.rst.txt             |    29 -
 .../_sources/quick_start/quick_start.md.txt        |   524 -
 .../quick_start/quick_start_with_helm.md.txt       |   106 -
 .../quick_start/quick_start_with_jdbc.md.txt       |    93 -
 .../quick_start/quick_start_with_jupyter.md.txt    |    20 -
 .../r1.6.0-incubating/_sources/requirements.txt    |    26 -
 .../_sources/security/authentication.rst.txt       |    46 -
 .../_sources/security/authorization/index.rst.txt  |    22 -
 .../security/authorization/spark/build.md.txt      |   103 -
 .../security/authorization/spark/index.rst.txt     |    27 -
 .../security/authorization/spark/install.md.txt    |   142 -
 .../security/authorization/spark/overview.rst.txt  |    62 -
 .../security/hadoop_credentials_manager.md.txt     |    85 -
 .../_sources/security/index.rst.txt                |    28 -
 .../_sources/security/jdbc.md.txt                  |    49 -
 .../_sources/security/kerberos.rst.txt             |   118 -
 .../_sources/security/kinit.md.txt                 |   107 -
 .../_sources/security/ldap.rst.txt                 |    21 -
 .../r1.6.0-incubating/_sources/tools/index.rst.txt |    27 -
 .../_sources/tools/kyuubi-admin.rst.txt            |    71 -
 .../_sources/tools/kyuubi-ctl.md.txt               |   162 -
 .../_sources/tools/spark_block_cleaner.md.txt      |   129 -
 content/docs/r1.6.1-incubating/404.html            |  1196 ++
 .../r1.6.1-incubating/_images/aqe_switch_join.png  |   Bin 0 -> 38666 bytes
 .../_images/azure_create_azure_access_key.png      |   Bin 0 -> 61300 bytes
 .../_images/azure_create_new_container.png         |   Bin 0 -> 96292 bytes
 .../azure_spark_connection_test_storage.png        |   Bin 0 -> 41914 bytes
 .../_images/blog-adaptive-query-execution-2.png    |   Bin 0 -> 96428 bytes
 .../_images/blog-adaptive-query-execution-3.png    |   Bin 0 -> 28412 bytes
 .../_images/blog-adaptive-query-execution-6.png    |   Bin 0 -> 45949 bytes
 .../r1.6.1-incubating/_images/cloudera_manager.png |   Bin 0 -> 31447 bytes
 .../r1.6.1-incubating/_images/configuration.png    |   Bin 0 -> 60289 bytes
 .../_images/configure_database_connection.png      |   Bin 0 -> 609661 bytes
 .../_images/configure_database_connection_ha.png   |   Bin 0 -> 610635 bytes
 .../_images/datasource_and_driver.png              |   Bin 0 -> 57829 bytes
 .../_images/dra_executor_add_ratio.png             |   Bin 0 -> 14894 bytes
 .../_images/dra_executor_added.png                 |   Bin 0 -> 15171 bytes
 .../_images/dra_executor_removal.png               |   Bin 0 -> 15336 bytes
 .../r1.6.1-incubating/_images/dra_task_fin.png     |   Bin 0 -> 16225 bytes
 .../r1.6.1-incubating/_images/dra_task_pending.png |   Bin 0 -> 14941 bytes
 content/docs/r1.6.1-incubating/_images/editor.png  |   Bin 0 -> 26132 bytes
 .../r1.6.1-incubating/_images/flink_jobs_page.png  |   Bin 0 -> 65991 bytes
 content/docs/r1.6.1-incubating/_images/hang.png    |   Bin 0 -> 18064 bytes
 .../docs/r1.6.1-incubating/_images/idea_debug.png  |   Bin 0 -> 64079 bytes
 .../_images/incremental_collection.png             |   Bin 0 -> 119034 bytes
 .../_images/kyuubi_architecture_new.png            |   Bin 0 -> 127660 bytes
 .../_images/kyuubi_kerberos_authentication.png     |   Bin 0 -> 144968 bytes
 .../docs/r1.6.1-incubating/_images/kyuubi_logo.png |   Bin 0 -> 23347 bytes
 .../_images/kyuubi_start_status_spark_UI.png       |   Bin 0 -> 63909 bytes
 .../_images/localshufflereader.png                 |   Bin 0 -> 75119 bytes
 .../docs/r1.6.1-incubating/_images/metadata.png    |   Bin 0 -> 891987 bytes
 .../_images/new_database_connection.png            |   Bin 0 -> 719371 bytes
 .../r1.6.1-incubating/_images/select_database.png  |   Bin 0 -> 55223 bytes
 .../r1.6.1-incubating/_images/spark_jobs_page.png  |   Bin 0 -> 226250 bytes
 .../r1.6.1-incubating/_images/spark_sql_cdh6.png   |   Bin 0 -> 123038 bytes
 .../r1.6.1-incubating/_images/spark_sql_docker.png |   Bin 0 -> 110905 bytes
 content/docs/r1.6.1-incubating/_images/start.png   |   Bin 0 -> 35392 bytes
 content/docs/r1.6.1-incubating/_images/sts.png     |   Bin 0 -> 56068 bytes
 .../r1.6.1-incubating/_images/trino-query-page.png |   Bin 0 -> 133468 bytes
 .../docs/r1.6.1-incubating/_images/workspace.png   |   Bin 0 -> 16371 bytes
 .../r1.6.1-incubating/_images/zorder-workflow.png  |   Bin 0 -> 49985 bytes
 content/docs/r1.6.1-incubating/_static/basic.css   |   906 ++
 .../docs/r1.6.1-incubating/_static/css/custom.css  |    47 +
 content/docs/r1.6.1-incubating/_static/doctools.js |   358 +
 .../_static/documentation_options.js               |    14 +
 content/docs/r1.6.1-incubating/_static/file.png    |   Bin 0 -> 286 bytes
 .../_static/images/logo_binder.svg                 |    19 +
 .../_static/images/logo_colab.png                  |   Bin 0 -> 7601 bytes
 .../_static/images/logo_deepnote.svg               |     1 +
 .../_static/images/logo_jupyterhub.svg             |     1 +
 .../docs/r1.6.1-incubating/_static/jquery-3.5.1.js | 10872 +++++++++++++++++++
 content/docs/r1.6.1-incubating/_static/jquery.js   |     2 +
 .../docs/r1.6.1-incubating/_static/kyuubi_logo.png |   Bin 0 -> 23347 bytes
 .../r1.6.1-incubating/_static/kyuubi_logo_red.png  |   Bin 0 -> 1179 bytes
 .../r1.6.1-incubating/_static/language_data.js     |   297 +
 .../_static/locales/ar/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/bg/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/bn/LC_MESSAGES/booktheme.po    |    66 +
 .../_static/locales/ca/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/cs/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/da/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/de/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/el/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/eo/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/es/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/et/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/fi/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/fr/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/hr/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/id/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/it/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/iw/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ja/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ko/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/lt/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/lv/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ml/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/mr/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/ms/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/nl/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/no/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/pl/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/pt/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ro/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ru/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/sk/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/sl/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/sr/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/sv/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ta/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/te/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/tg/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/th/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/tl/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/tr/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/uk/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/ur/LC_MESSAGES/booktheme.po    |    69 +
 .../_static/locales/vi/LC_MESSAGES/booktheme.po    |    81 +
 .../_static/locales/zh_CN/LC_MESSAGES/booktheme.po |    78 +
 .../_static/locales/zh_TW/LC_MESSAGES/booktheme.po |    81 +
 content/docs/r1.6.1-incubating/_static/minus.png   |   Bin 0 -> 90 bytes
 content/docs/r1.6.1-incubating/_static/plus.png    |   Bin 0 -> 90 bytes
 .../docs/r1.6.1-incubating/_static/pygments.css    |    74 +
 .../_static/sbt-webpack-macros.html                |    11 +
 .../_static/scripts/pydata-sphinx-theme.js         |    32 +
 .../_static/scripts/sphinx-book-theme.js           |     2 +
 .../_static/scripts/sphinx-book-theme.js.map       |     1 +
 .../docs/r1.6.1-incubating/_static/searchtools.js  |   525 +
 .../_static/styles/pydata-sphinx-theme.css         |     6 +
 .../_static/styles/sphinx-book-theme.css           |     8 +
 .../r1.6.1-incubating/_static/styles/theme.css     |   134 +
 .../r1.6.1-incubating/_static/underscore-1.13.1.js |  2042 ++++
 .../docs/r1.6.1-incubating/_static/underscore.js   |     6 +
 .../_static/vendor/fontawesome/5.13.0/LICENSE.txt  |    34 +
 .../vendor/fontawesome/5.13.0/css/all.min.css      |     5 +
 .../fontawesome/5.13.0/webfonts/fa-brands-400.eot  |   Bin 0 -> 133034 bytes
 .../fontawesome/5.13.0/webfonts/fa-brands-400.svg  |  3570 ++++++
 .../fontawesome/5.13.0/webfonts/fa-brands-400.ttf  |   Bin 0 -> 132728 bytes
 .../fontawesome/5.13.0/webfonts/fa-brands-400.woff |   Bin 0 -> 89824 bytes
 .../5.13.0/webfonts/fa-brands-400.woff2            |   Bin 0 -> 76612 bytes
 .../fontawesome/5.13.0/webfonts/fa-regular-400.eot |   Bin 0 -> 34390 bytes
 .../fontawesome/5.13.0/webfonts/fa-regular-400.svg |   803 ++
 .../fontawesome/5.13.0/webfonts/fa-regular-400.ttf |   Bin 0 -> 34092 bytes
 .../5.13.0/webfonts/fa-regular-400.woff            |   Bin 0 -> 16800 bytes
 .../5.13.0/webfonts/fa-regular-400.woff2           |   Bin 0 -> 13584 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.eot   |   Bin 0 -> 202902 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.svg   |  4938 +++++++++
 .../fontawesome/5.13.0/webfonts/fa-solid-900.ttf   |   Bin 0 -> 202616 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.woff  |   Bin 0 -> 103300 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.woff2 |   Bin 0 -> 79444 bytes
 .../r1.6.1-incubating/_static/webpack-macros.html  |    29 +
 content/docs/r1.6.1-incubating/appendix/index.html |  1315 +++
 .../r1.6.1-incubating/appendix/terminology.html    |  1658 +++
 .../changelog/v1.5.1-incubating.html               |  1277 +++
 .../changelog/v1.5.2-incubating.html               |  1277 +++
 .../changelog/v1.6.0-incubating.html               |  1277 +++
 .../changelog/v1.6.1-incubating.html               |  1277 +++
 .../client/advanced/configurations.html            |  1292 +++
 .../client/advanced/features/engine_pool.html      |  1292 +++
 .../client/advanced/features/engine_resouces.html  |  1276 +++
 .../advanced/features/engine_share_level.html      |  1292 +++
 .../client/advanced/features/engine_ttl.html       |  1292 +++
 .../client/advanced/features/engine_type.html      |  1292 +++
 .../client/advanced/features/index.html            |  1302 +++
 .../client/advanced/features/plan_only.html        |  1292 +++
 .../client/advanced/features/scala.html            |  1292 +++
 .../r1.6.1-incubating/client/advanced/index.html   |  1318 +++
 .../client/advanced/kerberos.html                  |  1690 +++
 .../r1.6.1-incubating/client/advanced/logging.html |  1292 +++
 .../client/bi_tools/datagrip.html                  |  1503 +++
 .../r1.6.1-incubating/client/bi_tools/dbeaver.html |  1525 +++
 .../r1.6.1-incubating/client/bi_tools/hue.html     |  1513 +++
 .../r1.6.1-incubating/client/bi_tools/index.html   |  1308 +++
 .../r1.6.1-incubating/client/bi_tools/powerbi.html |  1296 +++
 .../client/bi_tools/superset.html                  |  1296 +++
 .../r1.6.1-incubating/client/bi_tools/tableau.html |  1296 +++
 .../r1.6.1-incubating/client/cli/hive_beeline.html |  1347 +++
 .../docs/r1.6.1-incubating/client/cli/index.html   |  1301 +++
 .../client/cli/kyuubi_beeline.html                 |  1296 +++
 content/docs/r1.6.1-incubating/client/index.html   |  1348 +++
 .../r1.6.1-incubating/client/jdbc/hive_jdbc.html   |  1471 +++
 .../docs/r1.6.1-incubating/client/jdbc/index.html  |  1299 +++
 .../r1.6.1-incubating/client/jdbc/kyuubi_jdbc.html |  1588 +++
 .../r1.6.1-incubating/client/jdbc/mysql_jdbc.html  |  1300 +++
 .../docs/r1.6.1-incubating/client/odbc/index.html  |  1294 +++
 .../r1.6.1-incubating/client/python/index.html     |  1303 +++
 .../r1.6.1-incubating/client/python/pyhive.html    |  1296 +++
 .../r1.6.1-incubating/client/python/pyspark.html   |  1565 +++
 .../docs/r1.6.1-incubating/client/rest/index.html  |  1300 +++
 .../r1.6.1-incubating/client/rest/rest_api.html    |  1828 ++++
 .../r1.6.1-incubating/client/thrift/index.html     |  1294 +++
 .../docs/r1.6.1-incubating/client/ui/index.html    |  1294 +++
 .../r1.6.1-incubating/community/CONTRIBUTING.html  |  1416 +++
 .../r1.6.1-incubating/community/collaborators.html |  1309 +++
 .../docs/r1.6.1-incubating/community/index.html    |  1316 +++
 .../docs/r1.6.1-incubating/community/release.html  |  1783 +++
 .../connector/flink/flink_table_store.html         |  1428 +++
 .../r1.6.1-incubating/connector/flink/hudi.html    |  1435 +++
 .../r1.6.1-incubating/connector/flink/iceberg.html |  1436 +++
 .../r1.6.1-incubating/connector/flink/index.html   |  1311 +++
 .../r1.6.1-incubating/connector/hive/index.html    |  1294 +++
 .../docs/r1.6.1-incubating/connector/index.html    |  1340 +++
 .../connector/spark/delta_lake.html                |  1427 +++
 .../spark/delta_lake_with_azure_blob.html          |  1811 +++
 .../connector/spark/flink_table_store.html         |  1419 +++
 .../r1.6.1-incubating/connector/spark/hudi.html    |  1427 +++
 .../r1.6.1-incubating/connector/spark/iceberg.html |  1447 +++
 .../r1.6.1-incubating/connector/spark/index.html   |  1354 +++
 .../r1.6.1-incubating/connector/spark/kudu.html    |  1625 +++
 .../r1.6.1-incubating/connector/spark/tidb.html    |  1434 +++
 .../r1.6.1-incubating/connector/spark/tpcds.html   |  1436 +++
 .../r1.6.1-incubating/connector/spark/tpch.html    |  1432 +++
 .../connector/trino/flink_table_store.html         |  1421 +++
 .../r1.6.1-incubating/connector/trino/iceberg.html |  1410 +++
 .../r1.6.1-incubating/connector/trino/index.html   |  1306 +++
 .../deployment/engine_lifecycle.html               |  1465 +++
 .../deployment/engine_on_kubernetes.html           |  1519 +++
 .../deployment/engine_on_yarn.html                 |  1886 ++++
 .../deployment/engine_share_level.html}            |  1662 ++-
 .../deployment/high_availability_guide.html}       |  1527 ++-
 .../deployment/hive_metastore.html                 |  1686 +++
 .../docs/r1.6.1-incubating/deployment/index.html   |  1442 +++
 .../deployment/kyuubi_on_kubernetes.html           |  1478 +++
 .../deployment/migration-guide.html                |  1366 +++
 .../r1.6.1-incubating/deployment/settings.html     |  3958 +++++++
 .../r1.6.1-incubating/deployment/spark/aqe.html    |  1818 ++++
 .../deployment/spark/dynamic_allocation.html       |  1607 +++
 .../deployment/spark/incremental_collection.html   |  1457 +++
 .../r1.6.1-incubating/deployment/spark/index.html  |  1323 +++
 .../develop_tools/build_document.html              |  1428 +++
 .../r1.6.1-incubating/develop_tools/building.html  |  1470 +++
 .../r1.6.1-incubating/develop_tools/debugging.html |  1508 +++
 .../r1.6.1-incubating/develop_tools/developer.html |  1418 +++
 .../develop_tools/distribution.html                |  1335 +++
 .../develop_tools/idea_setup.html                  |  1463 +++
 .../r1.6.1-incubating/develop_tools/index.html     |  1342 +++
 .../r1.6.1-incubating/develop_tools/testing.html   |  1393 +++
 .../extensions/engines/flink/index.html            |  1301 +++
 .../extensions/engines/hive/index.html             |  1303 +++
 .../extensions/engines/index.html                  |  1319 +++
 .../extensions/engines/spark/functions.html        |  1349 +++
 .../extensions/engines/spark/index.html            |  1301 +++
 .../extensions/engines/spark/rules.html            |  1515 +++
 .../engines/spark/z-order-benchmark.html           |  1648 +++
 .../extensions/engines/spark/z-order.html          |  1599 +++
 .../extensions/engines/trino/index.html            |  1301 +++
 .../docs/r1.6.1-incubating/extensions/index.html   |  1321 +++
 .../extensions/server/applications.html            |  1478 +++
 .../extensions/server/authentication.html          |  1394 +++
 .../extensions/server/configuration.html           |  1401 +++
 .../extensions/server/events.html                  |  1300 +++
 .../r1.6.1-incubating/extensions/server/index.html |  1303 +++
 content/docs/r1.6.1-incubating/genindex.html       |  1200 ++
 content/docs/r1.6.1-incubating/index.html          |  1549 +++
 .../Babel-2.11.0.dist-info/entry_points.html       |  1316 +++
 .../Babel-2.11.0.dist-info/top_level.html          |  1301 +++
 .../Jinja2-3.1.2.dist-info/LICENSE.html            |  1325 +++
 .../Jinja2-3.1.2.dist-info/entry_points.html       |  1302 +++
 .../Jinja2-3.1.2.dist-info/top_level.html          |  1301 +++
 .../Markdown-3.3.7.dist-info/LICENSE.html          |  1327 +++
 .../Markdown-3.3.7.dist-info/entry_points.html     |  1321 +++
 .../Markdown-3.3.7.dist-info/top_level.html        |  1301 +++
 .../MarkupSafe-2.1.1.dist-info/LICENSE.html        |  1325 +++
 .../MarkupSafe-2.1.1.dist-info/top_level.html      |  1301 +++
 .../PyYAML-6.0.dist-info/top_level.html            |  1302 +++
 .../Pygments-2.13.0.dist-info/entry_points.html    |  1302 +++
 .../Pygments-2.13.0.dist-info/top_level.html       |  1301 +++
 .../Sphinx-4.5.0.dist-info/entry_points.html       |  1307 +++
 .../Sphinx-4.5.0.dist-info/top_level.html          |  1301 +++
 .../alabaster-0.7.12.dist-info/DESCRIPTION.html    |  1288 +++
 .../alabaster-0.7.12.dist-info/LICENSE.html        |  1329 +++
 .../alabaster-0.7.12.dist-info/entry_points.html   |  1302 +++
 .../alabaster-0.7.12.dist-info/top_level.html      |  1301 +++
 .../beautifulsoup4-4.11.1.dist-info/top_level.html |  1303 +++
 .../certifi-2022.9.24.dist-info/top_level.html     |  1301 +++
 .../entry_points.html                              |  1302 +++
 .../top_level.html                                 |  1301 +++
 .../commonmark-0.9.1.dist-info/entry_points.html   |  1302 +++
 .../commonmark-0.9.1.dist-info/top_level.html      |  1301 +++
 .../docutils-0.17.1.dist-info/COPYING.html         |  1440 +++
 .../docutils-0.17.1.dist-info/top_level.html       |  1301 +++
 .../docutils/parsers/rst/include/README.html       |  1286 +++
 .../docutils/parsers/rst/include/isoamsa.html      |  1300 +++
 .../docutils/parsers/rst/include/isoamsb.html      |  1300 +++
 .../docutils/parsers/rst/include/isoamsc.html      |  1300 +++
 .../docutils/parsers/rst/include/isoamsn.html      |  1300 +++
 .../docutils/parsers/rst/include/isoamso.html      |  1300 +++
 .../docutils/parsers/rst/include/isoamsr.html      |  1300 +++
 .../docutils/parsers/rst/include/isobox.html       |  1300 +++
 .../docutils/parsers/rst/include/isocyr1.html      |  1300 +++
 .../docutils/parsers/rst/include/isocyr2.html      |  1300 +++
 .../docutils/parsers/rst/include/isodia.html       |  1300 +++
 .../docutils/parsers/rst/include/isogrk1.html      |  1300 +++
 .../docutils/parsers/rst/include/isogrk2.html      |  1300 +++
 .../docutils/parsers/rst/include/isogrk3.html      |  1300 +++
 .../docutils/parsers/rst/include/isogrk4-wide.html |  1300 +++
 .../docutils/parsers/rst/include/isogrk4.html      |  1300 +++
 .../docutils/parsers/rst/include/isolat1.html      |  1300 +++
 .../docutils/parsers/rst/include/isolat2.html      |  1300 +++
 .../docutils/parsers/rst/include/isomfrk-wide.html |  1300 +++
 .../docutils/parsers/rst/include/isomfrk.html      |  1300 +++
 .../docutils/parsers/rst/include/isomopf-wide.html |  1300 +++
 .../docutils/parsers/rst/include/isomopf.html      |  1300 +++
 .../docutils/parsers/rst/include/isomscr-wide.html |  1300 +++
 .../docutils/parsers/rst/include/isomscr.html      |  1300 +++
 .../docutils/parsers/rst/include/isonum.html       |  1300 +++
 .../docutils/parsers/rst/include/isopub.html       |  1300 +++
 .../docutils/parsers/rst/include/isotech.html      |  1300 +++
 .../docutils/parsers/rst/include/mmlalias.html     |  1300 +++
 .../parsers/rst/include/mmlextra-wide.html         |  1300 +++
 .../docutils/parsers/rst/include/mmlextra.html     |  1300 +++
 .../docutils/parsers/rst/include/s5defs.html       |  1300 +++
 .../docutils/parsers/rst/include/xhtml1-lat1.html  |  1300 +++
 .../parsers/rst/include/xhtml1-special.html        |  1300 +++
 .../parsers/rst/include/xhtml1-symbol.html         |  1300 +++
 .../docutils/writers/html4css1/template.html       |  1308 +++
 .../docutils/writers/html5_polyglot/template.html  |  1308 +++
 .../docutils/writers/pep_html/template.html        |  1335 +++
 .../docutils/writers/s5_html/themes/README.html    |  1305 +++
 .../site-packages/idna-3.4.dist-info/LICENSE.html  |  1325 +++
 .../imagesize-1.4.1.dist-info/LICENSE.html         |  1289 +++
 .../imagesize-1.4.1.dist-info/top_level.html       |  1301 +++
 .../packaging-21.3.dist-info/top_level.html        |  1301 +++
 .../pip-22.3.1.dist-info/LICENSE.html              |  1317 +++
 .../pip-22.3.1.dist-info/entry_points.html         |  1304 +++
 .../pip-22.3.1.dist-info/top_level.html            |  1301 +++
 .../site-packages/pip/_vendor/vendor.html          |  1327 +++
 .../entry_points.html                              |  1302 +++
 .../static/vendor/fontawesome/5.13.0/LICENSE.html  |  1301 +++
 .../pytz-2022.6.dist-info/LICENSE.html             |  1316 +++
 .../pytz-2022.6.dist-info/top_level.html           |  1301 +++
 .../recommonmark-0.7.1.dist-info/entry_points.html |  1307 +++
 .../recommonmark-0.7.1.dist-info/top_level.html    |  1301 +++
 .../requests-2.28.1.dist-info/top_level.html       |  1301 +++
 .../setuptools-65.5.1.dist-info/entry_points.html  |  1354 +++
 .../setuptools-65.5.1.dist-info/top_level.html     |  1303 +++
 .../snowballstemmer-2.2.0.dist-info/top_level.html |  1301 +++
 .../entry_points.html                              |  1300 +++
 .../license_files/LICENSE.html                     |  1317 +++
 .../autosummary/templates/autosummary/base.html    |  1301 +++
 .../autosummary/templates/autosummary/class.html   |  1301 +++
 .../autosummary/templates/autosummary/module.html  |  1311 +++
 .../entry_points.html                              |  1302 +++
 .../assets/translations/README.html                |  1407 +++
 .../top_level.html                                 |  1301 +++
 .../top_level.html                                 |  1302 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../namespace_packages.html                        |  1301 +++
 .../top_level.html                                 |  1301 +++
 .../urllib3-1.26.12.dist-info/LICENSE.html         |  1317 +++
 .../urllib3-1.26.12.dist-info/top_level.html       |  1301 +++
 .../wheel-0.38.4.dist-info/LICENSE.html            |  1317 +++
 .../wheel-0.38.4.dist-info/entry_points.html       |  1304 +++
 .../wheel-0.38.4.dist-info/top_level.html          |  1301 +++
 content/docs/r1.6.1-incubating/monitor/events.html |  1291 +++
 content/docs/r1.6.1-incubating/monitor/index.html  |  1314 +++
 .../docs/r1.6.1-incubating/monitor/logging.html    |  1724 +++
 .../docs/r1.6.1-incubating/monitor/metrics.html    |  1751 +++
 .../monitor/trouble_shooting.html                  |  1659 +++
 content/docs/r1.6.1-incubating/objects.inv         |   Bin 0 -> 4717 bytes
 .../overview/architecture.html}                    |  1612 ++-
 content/docs/r1.6.1-incubating/overview/index.html |  1321 +++
 .../r1.6.1-incubating/overview/kyuubi_vs_hive.html |  1448 +++
 .../overview/kyuubi_vs_thriftserver.html           |  1848 ++++
 .../docs/r1.6.1-incubating/quick_start/index.html  |  1315 +++
 .../r1.6.1-incubating/quick_start/quick_start.html |  2095 ++++
 .../quick_start/quick_start_with_helm.html         |  1525 +++
 .../quick_start/quick_start_with_jdbc.html         |  1426 +++
 .../quick_start/quick_start_with_jupyter.html      |  1291 +++
 content/docs/r1.6.1-incubating/requirements.html   |  1325 +++
 content/docs/r1.6.1-incubating/search.html         |  1229 +++
 content/docs/r1.6.1-incubating/searchindex.js      |     1 +
 .../r1.6.1-incubating/security/authentication.html |  1340 +++
 .../security/authorization/index.html              |  1302 +++
 .../security/authorization/spark/build.html        |  1522 +++
 .../security/authorization/spark/index.html        |  1315 +++
 .../security/authorization/spark/install.html      |  1553 +++
 .../security/authorization/spark/overview.html     |  1399 +++
 .../security/hadoop_credentials_manager.html       |  1570 +++
 content/docs/r1.6.1-incubating/security/index.html |  1320 +++
 content/docs/r1.6.1-incubating/security/jdbc.html  |  1410 +++
 .../docs/r1.6.1-incubating/security/kerberos.html  |  1471 +++
 content/docs/r1.6.1-incubating/security/kinit.html |  1478 +++
 content/docs/r1.6.1-incubating/security/ldap.html  |  1296 +++
 content/docs/r1.6.1-incubating/tools/index.html    |  1317 +++
 .../docs/r1.6.1-incubating/tools/kyuubi-admin.html |  1392 +++
 .../docs/r1.6.1-incubating/tools/kyuubi-ctl.html   |  1591 +++
 .../tools/spark_block_cleaner.html                 |  1554 +++
 503 files changed, 366562 insertions(+), 11819 deletions(-)

diff --git a/content/docs/latest b/content/docs/latest
new file mode 120000
index 0000000..0ce3a31
--- /dev/null
+++ b/content/docs/latest
@@ -0,0 +1 @@
+r1.6.1-incubating
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/appendix/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/appendix/index.rst.txt
deleted file mode 100644
index fdb40cf..0000000
--- a/content/docs/r1.6.0-incubating/_sources/appendix/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Appendixes
-==========
-
-.. toctree::
-    :maxdepth: 3
-    :numbered: 4
-
-
-    terminology
diff --git a/content/docs/r1.6.0-incubating/_sources/appendix/terminology.md.txt b/content/docs/r1.6.0-incubating/_sources/appendix/terminology.md.txt
deleted file mode 100644
index 77d4dea..0000000
--- a/content/docs/r1.6.0-incubating/_sources/appendix/terminology.md.txt
+++ /dev/null
@@ -1,164 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Terminologies
-
-## Kyuubi
-
-Kyuubi is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark.
-
-### JDBC
-
-> The Java Database Connectivity (JDBC) API is the industry standard for database-independent connectivity between the Java programming language and a wide range of databases: SQL databases and other tabular data sources,
-> such as spreadsheets or flat files.
-> The JDBC API provides a call-level API for SQL-based database access.
-
-> JDBC technology allows you to use the Java programming language to exploit "Write Once, Run Anywhere" capabilities for applications that require access to enterprise data.
-> With a JDBC technology-enabled driver, you can connect all corporate data even in a heterogeneous environment.
-
-<p align=right>
-<em>
-<a href="https://www.oracle.com/java/technologies/javase/javase-tech-database.html">https://www.oracle.com/java/technologies/javase/javase-tech-database.html</a>
-</em>
-</p>
-
-Typically, there is a gap between business development and big data analytics.
-If the two are forcefully coupled, the resulting system becomes difficult to operate and optimize.
-Conversely, if they are decoupled, the value of both can be maximized.
-Business experts can stay focused on their own business development,
-while Big Data engineers can continuously optimize server-side performance and stability.
-Kyuubi combines the two seamlessly through an easy-to-use JDBC interface.
-
-#### Apache Hive
-
-> The Apache Hive ™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
-
-<p align=right>
-<em>
-<a href="https://hive.apache.org/">https://hive.apache.org</a>
-</em>
-</p>
-
-Kyuubi supports Hive JDBC driver, which helps you seamlessly migrate your slow queries from Hive to Spark SQL.
-
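As an illustration of what that migration looks like on the client side, here is a minimal sketch of running a query through Kyuubi with the Hive JDBC driver; it assumes a Kyuubi server on localhost listening on the default frontend port 10009 and `org.apache.hive:hive-jdbc` on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KyuubiJdbcExample {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: a Kyuubi server on localhost with the default
        // thrift binary frontend port 10009; the Hive JDBC driver handles
        // the jdbc:hive2:// scheme.
        String url = "jdbc:hive2://localhost:10009/default";
        try (Connection conn = DriverManager.getConnection(url, "anonymous", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 AS probe")) {
            while (rs.next()) {
                System.out.println("probe = " + rs.getInt("probe"));
            }
        }
    }
}
```

In many cases, pointing an existing HiveServer2 client at Kyuubi is only a change of host and port in this URL.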
-#### Apache Thrift
-
-> The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.
-
-<p align=right>
-<em>
-<a href="https://thrift.apache.org/">https://thrift.apache.org</a>
-</em>
-</p>
-
-### Server
-
-A server is a daemon process that handles concurrent connection and query requests, converting them into various operations against the **query engines** to complete the responses to clients.
-
-_**Aliases: Kyuubi Server / Kyuubi Instance / k.i.**_
-
-### ServerSpace
-
-A ServerSpace is used to register servers and expose them together as a service layer to clients.
-
-### Engine
-
-An engine handles all queries through Kyuubi servers.
-It is created in one Kyuubi server and can be shared with other Kyuubi servers by registering itself to an engine namespace.
-All its capabilities are mainly powered by Spark SQL.
-
-_**Aliases: Query Engine / Engine Instance / e.i.**_
-
-### EngineSpace
-
-An EngineSpace is internally used by servers to register and interact with engines.
-
-#### Apache Spark
-
-> [Apache Spark™](https://spark.apache.org/) is a unified analytics engine for large-scale data processing.
-
-<p align=right>
-<em>
-<a href="https://spark.apache.org">https://spark.apache.org</a>
-</em>
-</p>
-
-### Multi Tenancy
-
-Kyuubi guarantees end-to-end multi-tenant isolation and sharing in the following pipeline:
-
-```
-Client --> Kyuubi --> Query Engine(Spark) --> Resource Manager --> Data Storage Layer
-```
-
-### High Availability / Load Balance
-
-For an enterprise service, an SLA commitment is essential. Deploying Kyuubi in High Availability (HA) mode helps you meet that commitment.
-
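On the client side, HA mode is usually reached through ZooKeeper-based service discovery. A hedged sketch, assuming a hypothetical ZooKeeper ensemble at `zk1:2181,zk2:2181,zk3:2181` and the default `kyuubi` namespace:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class KyuubiHaJdbcExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper ensemble; the driver asks ZooKeeper for a live
        // Kyuubi instance registered under the given namespace instead of
        // pinning a single server host.
        String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default;"
                + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi";
        try (Connection conn = DriverManager.getConnection(url, "anonymous", "")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```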
-#### Apache Zookeeper
-
-> Apache ZooKeeper is an effort to develop and maintain an open-source server which enables highly reliable distributed coordination.
-
-<p align=right>
-<em>
-<a href="https://zookeeper.apache.org/">https://zookeeper.apache.org</a>
-</em>
-</p>
-
-#### Apache Curator
-
-> Apache Curator is a Java/JVM client library for Apache ZooKeeper, a distributed coordination service. It includes a high-level API framework and utilities to make using Apache ZooKeeper much easier and more reliable. It also includes recipes for common use cases and extensions such as service discovery and a Java 8 asynchronous DSL.
-
-<p align=right>
-<em>
-<a href="https://curator.apache.org/">https://curator.apache.org</a>
-</em>
-</p>
-
-## DataLake & LakeHouse
-
-Kyuubi unifies DataLake & LakeHouse access in the simplest pure SQL way, and it is also the most secure way, with authentication and SQL-standard authorization.
-
-### Apache Iceberg
-
-> Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to Trino and Spark that use a high-performance format that works just like a SQL table.
-
-<p align=right>
-<em>
-<a href="http://iceberg.apache.org/">http://iceberg.apache.org/</a>
-</em>
-</p>
-
-### Delta Lake
-
-> Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads.
-
-<p align=right>
-<em>
-<a href="https://delta.io/">https://delta.io</a>
-</em>
-</p>
-
-### Apache Hudi
-
-> Apache Hudi ingests & manages storage of large analytical datasets over DFS (hdfs or cloud stores).
-
-<p align=right>
-<em>
-<a href="https://hudi.apache.org/">https://hudi.apache.org</a>
-</em>
-</p>
diff --git a/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.1-incubating.md.txt b/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.1-incubating.md.txt
deleted file mode 100644
index 405eba7..0000000
--- a/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.1-incubating.md.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-## Changelog for Apache Kyuubi(Incubating) v1.5.1-incubating
-
-[[KYUUBI #2354] Fix NPE in process builder log capture thread](https://github.com/apache/incubator-kyuubi/commit/5e76334e)  
-[[KYUUBI #2296] Fix operation log file handler leak](https://github.com/apache/incubator-kyuubi/commit/809ea2a6)  
-[[KYUUBI #2266] The default value of frontend.connection.url.use.hostname should be set to true to be consistent with previous versions](https://github.com/apache/incubator-kyuubi/commit/d3e25f08)  
-[[KYUUBI #2255]The engine state of Spark's EngineEvent is hardcoded with 0](https://github.com/apache/incubator-kyuubi/commit/2af8bbb4)  
-[[KYUUBI #2008][FOLLOWUP] Support engine type and subdomain in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/d1a2dda0)  
-[[KYUUBI #2156][FOLLOWUP] Fix configuration format in document](https://github.com/apache/incubator-kyuubi/commit/5225b540)  
-[[KYUUBI #2156] Change log to reflect exactly why getting token failed](https://github.com/apache/incubator-kyuubi/commit/21ca7540)  
-[[KYUUBI #2134] Respect Spark bundled log4j in extension modules](https://github.com/apache/incubator-kyuubi/commit/30dc84b5)  
-[[KYUUBI #2150] [DOCS] Fix Getting Started With Kyuubi on Kubernetes](https://github.com/apache/incubator-kyuubi/commit/e232a83a)  
diff --git a/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.2-incubating.md.txt b/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.2-incubating.md.txt
deleted file mode 100644
index a5bc2c8..0000000
--- a/content/docs/r1.6.0-incubating/_sources/changelog/v1.5.2-incubating.md.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-## Changelog for Apache Kyuubi(Incubating) v1.5.2-incubating
-
-[[KYUUBI #2841] [1.5] Revert "[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes"](https://github.com/apache/incubator-kyuubi/commit/2b23c0dc)  
-[[KYUUBI #2746][INFRA][1.5] Improve NOTICE of binary release](https://github.com/apache/incubator-kyuubi/commit/35a4c488)  
-[[KYUUBI-2422] Wrap close session with try-finally (#2836)](https://github.com/apache/incubator-kyuubi/commit/cbca761a)  
-[[KYUUBI #2227] Fix operation log dir not deleted issue](https://github.com/apache/incubator-kyuubi/commit/27bfa683)  
-[[KYUUBI #2208] Fixed session close operator log session dir not deleted](https://github.com/apache/incubator-kyuubi/commit/5a2bcb80)  
-[[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes](https://github.com/apache/incubator-kyuubi/commit/9fce6266)  
-[[KYUUBI #2736] Upgrade Jackson 2.13.3](https://github.com/apache/incubator-kyuubi/commit/9466a1ab)  
-[[KYUUBI #2720] Fix KyuubiDatabaseMetaData#supportsCatalogs*](https://github.com/apache/incubator-kyuubi/commit/268d1b27)  
-[[KYUUBI #2686][1.5] Fix lock bug if engine initialization timeout](https://github.com/apache/incubator-kyuubi/commit/7e9511a4)  
-[[KYUUBI #2640] Implement TGetInfoType CLI_ODBC_KEYWORDS](https://github.com/apache/incubator-kyuubi/commit/51067384)  
-[[KYUUBI #2450][FOLLOWUP] Remove opHandle from opHandleSet when exception occurs](https://github.com/apache/incubator-kyuubi/commit/a2c0f783)  
-[[KYUUBI #2478] Backport HIVE-19018 to Kyuubi Beeline](https://github.com/apache/incubator-kyuubi/commit/fbe38de7)  
-[[KYUUBI #2484] Add conf to SessionEvent and display it in EngineSessionPage](https://github.com/apache/incubator-kyuubi/commit/87f81e3c)  
-[[KYUUBI #2450] Update lastAccessTime in getStatus and add opHandle to opHandleSet before run](https://github.com/apache/incubator-kyuubi/commit/8b143689)  
diff --git a/content/docs/r1.6.0-incubating/_sources/changelog/v1.6.0-incubating.md.txt b/content/docs/r1.6.0-incubating/_sources/changelog/v1.6.0-incubating.md.txt
deleted file mode 100644
index e08bbd1..0000000
--- a/content/docs/r1.6.0-incubating/_sources/changelog/v1.6.0-incubating.md.txt
+++ /dev/null
@@ -1,618 +0,0 @@
-## Changelog for Apache Kyuubi(Incubating) v1.6.0-incubating
-
-[Revert "[KYUUBI #3020] kyuubi ldap add new config property kyuubi.authentication.ldap.bindpw and kyuubi.authentication.ldap.attrs"](https://github.com/apache/incubator-kyuubi/commit/1f57b108)  
-[Revert "[KYUUBI #3020][FOLLOWUP] Refactor the code style"](https://github.com/apache/incubator-kyuubi/commit/2191dea0)  
-[[KYUUBI #3316] [BUILD] Enable spark-3.3 profile for license check on GA](https://github.com/apache/incubator-kyuubi/commit/2b713f6d)  
-[[KYUUBI #3254] Supplement the licenses of support etcd discovery](https://github.com/apache/incubator-kyuubi/commit/e2c66515)  
-[[KYUUBI #3301] Construct lifetimeTerminatingChecker only when needed](https://github.com/apache/incubator-kyuubi/commit/7fd54499)  
-[[KYUUBI #3272] Synchronize graceful shutdown with main stop sequence](https://github.com/apache/incubator-kyuubi/commit/fe79deee)  
-[[KYUUBI #3281] [MINOR] Use AccessControlException instead of RuntimeException if check privilege failed](https://github.com/apache/incubator-kyuubi/commit/7c157714)  
-[[KYUUBI #3222][FOLLOWUP] Fixing placeholder and config of user in JDBC Authentication Provider](https://github.com/apache/incubator-kyuubi/commit/8b65e0e4)  
-[[KYUUBI #3275] [KYUUBI 3269] [DOCS] Doc for JDBC authentication provider](https://github.com/apache/incubator-kyuubi/commit/04f31153)  
-[[KYUUBI #3217] [DOCS] Doc for using Marcos in row-level filter in Authz](https://github.com/apache/incubator-kyuubi/commit/e76f8f7b)  
-[[KYUUBI #3297] [MINOR] Null is replaced by KyuubiSQLException.featureNotSupported()](https://github.com/apache/incubator-kyuubi/commit/1e3dc52f)  
-[[KYUUBI #3244] Bump Hudi 0.12.0](https://github.com/apache/incubator-kyuubi/commit/84b164ee)  
-[[KYUUBI #3287] Exclude reload4j from hadoop-minikdc](https://github.com/apache/incubator-kyuubi/commit/1348abb1)  
-[[KYUUBI #3222][FOLLOWUP] Introdude JdbcUtils to simplify code](https://github.com/apache/incubator-kyuubi/commit/9d83e632)  
-[[KYUUBI #3020][FOLLOWUP] Refactor the code style](https://github.com/apache/incubator-kyuubi/commit/a8e201a8)  
-[[KYUUBI #3020] kyuubi ldap add new config property kyuubi.authentication.ldap.bindpw and kyuubi.authentication.ldap.attrs](https://github.com/apache/incubator-kyuubi/commit/ef109e18)  
-[[KYUUBI #3226] Privileges should be checked only once in `RuleAuthorization`](https://github.com/apache/incubator-kyuubi/commit/6ac28198)  
-[[KYUUBI #3156] Expose REST frontend connection metrics](https://github.com/apache/incubator-kyuubi/commit/d15ca518)  
-[[KYUUBI #3241][DOCS] Update `Develop Tools / Building a Runnable Distribution`](https://github.com/apache/incubator-kyuubi/commit/dab0583e)  
-[[KYUUBI #3255] Add miss engine type config docs](https://github.com/apache/incubator-kyuubi/commit/a3b6c675)  
-[[KYUUBI #3214] Plan only mode should unset when mode value is incorrect](https://github.com/apache/incubator-kyuubi/commit/92f3a532)  
-[[KYUUBI #3239] [Subtask] DorisSQLEngine - Add integration tests](https://github.com/apache/incubator-kyuubi/commit/3cf5a20b)  
-[[KYUUBI #3252] Fix the problem that |release| in the document was not replaced correctly](https://github.com/apache/incubator-kyuubi/commit/ecda4188)  
-[[KYUUBI #3247] Minor clean up Kyuubi JDBC code](https://github.com/apache/incubator-kyuubi/commit/3b990d71)  
-[[KYUUBI #3222]  JDBC Authentication Provider for server](https://github.com/apache/incubator-kyuubi/commit/a63587b1)  
-[[KYUUBI #3243] Move trait Logging#initializeLogging to object Logging](https://github.com/apache/incubator-kyuubi/commit/8c2e7746)  
-[[KYUUBI #3245] Add spark-3.3 profile in building.md](https://github.com/apache/incubator-kyuubi/commit/e584880f)  
-[[KYUUBI #3184] OperationResource rowset api should have default values for maxrows and fetchorientation](https://github.com/apache/incubator-kyuubi/commit/1466165a)  
-[[KYUUBI #3220] Make kyuubi.engine.ui.stop.enabled false in HistoryServer](https://github.com/apache/incubator-kyuubi/commit/76b44c98)  
-[[KYUUBI #1776][FOLLOWUP] Fill empty td tag for `Failure Reason` column in EngineTable](https://github.com/apache/incubator-kyuubi/commit/f60e9d47)  
-[[KYUUBI #3157][DOC] Modify logging doc due to using log4j2 instead of log4j](https://github.com/apache/incubator-kyuubi/commit/bbe7a4d7)  
-[[KYUUBI #3228] [Subtask] Connectors for Spark SQL Query Engine -> TPC-DS](https://github.com/apache/incubator-kyuubi/commit/5ed671c8)  
-[[KYUUBI #3138] [Subtask] DorisSQLEngine - Add jdbc engine to dist](https://github.com/apache/incubator-kyuubi/commit/c473634e)  
-[[KYUUBI #3230] Flink SQL engine supports run across versions](https://github.com/apache/incubator-kyuubi/commit/db0047d5)  
-[[KYUUBI #3072][DOC] Add a doc of Flink Table Store for Flink SQL engine](https://github.com/apache/incubator-kyuubi/commit/06d43cb3)  
-[[KYUUBI #3170] Expose thrift binary connection metrics](https://github.com/apache/incubator-kyuubi/commit/23ad7801)  
-[[KYUUBI #3227] SparkConfParser supports parse bytes and time](https://github.com/apache/incubator-kyuubi/commit/9bdff9ba)  
-[[KYUUBI #3219] Error renew delegation tokens: Unknown version of delegation token 8](https://github.com/apache/incubator-kyuubi/commit/6c4a8b08)  
-[[KYUUBI #3080][DOC] Add a doc of the Flink Table Store for the Trino SQL Engine](https://github.com/apache/incubator-kyuubi/commit/e847ab35)  
-[[KYUUBI #3107] [Subtask] DorisSQLEngine - Add process builder (#3123)](https://github.com/apache/incubator-kyuubi/commit/33b70cfe)  
-[[KYUUBI #3211] [Subtask] Connectors for Spark SQL Query Engine -> TPC-H](https://github.com/apache/incubator-kyuubi/commit/f36508b5)  
-[[KYUUBI #3206] Change Flink default version to 1.15](https://github.com/apache/incubator-kyuubi/commit/86964fef)  
-[[KYUUBI #833] Check if `spark.kubernetes.executor.podNamePrefix` is invalid](https://github.com/apache/incubator-kyuubi/commit/b3723392)  
-[[KYUUBI #3098] Unify the event log code path](https://github.com/apache/incubator-kyuubi/commit/d0865255)  
-[[KYUUBI #3210] [DOCS] Mention Kyuubi Spark SQL extension supports Spark 3.3](https://github.com/apache/incubator-kyuubi/commit/6aa898e5)  
-[[KYUUBI #3204] Fix duplicated ldapServer#close in LdapAuthenticationProviderImplSuite](https://github.com/apache/incubator-kyuubi/commit/bedc22cb)  
-[[KYUUBI #3209] Support configure TPC-H connector in runtime](https://github.com/apache/incubator-kyuubi/commit/c9cc9b7e)  
-[[KYUUBI #3200] Make KyuubiSessionEvent.sessionId clear](https://github.com/apache/incubator-kyuubi/commit/875fedd1)  
-[[KYUUBI #3186] Support applying Row-level Filter and Data Masking policies for DatasourceV2 in Authz module](https://github.com/apache/incubator-kyuubi/commit/64b1d920)  
-[[KYUUBI #2584][INFRA] Migrate CI to Ubuntu 22.04](https://github.com/apache/incubator-kyuubi/commit/6061098a)  
-[[KYUUBI #3172][FLINK] Fix failed test cases in Flink 1.15](https://github.com/apache/incubator-kyuubi/commit/a75de1b5)  
-[[KYUUBI #3203] [DOCS] Fix typo](https://github.com/apache/incubator-kyuubi/commit/e48205d7)  
-[[KYUUBI #3192] Refactor TPCDSConf](https://github.com/apache/incubator-kyuubi/commit/7720c9f6)  
-[[KYUUBI #3180] Add component version util](https://github.com/apache/incubator-kyuubi/commit/3cdf84e9)  
-[[KYUUBI #3162] Bump Hadoop 3.3.4](https://github.com/apache/incubator-kyuubi/commit/782e5fb9)  
-[[KYUUBI #3191] [DOCS] Add missing binary scala version in engine jar name](https://github.com/apache/incubator-kyuubi/commit/b8162f15)  
-[[KYUUBI #3199] [BUILD] Fix travis JAVA_HOME](https://github.com/apache/incubator-kyuubi/commit/9d4d2948)  
-[[KYUUBI #3194][Scala-2.13] Refine deprecated config](https://github.com/apache/incubator-kyuubi/commit/fdb91686)  
-[[KYUUBI #3198] [DOCS] Fix index of Hudi Flink connector](https://github.com/apache/incubator-kyuubi/commit/c64b7648)  
-[[KYUUBI #3189] [BUILD] Bump jetcd 0.7.3 and pin Netty dependencies](https://github.com/apache/incubator-kyuubi/commit/a46d6550)  
-[[KYUUBI #3190] [BUILD] Use jdk_switcher to setup JAVA_HOME](https://github.com/apache/incubator-kyuubi/commit/ea47cbc1)  
-[[KYUUBI #3145] Bump log4j from 2.17.2 to 2.18.0](https://github.com/apache/incubator-kyuubi/commit/3618002b)  
-[[KYUUBI #3135] Bump gRPC from 1.47.0 to 1.48.0](https://github.com/apache/incubator-kyuubi/commit/b87ee983)  
-[[KYUUBI #3178] Add application operation docs](https://github.com/apache/incubator-kyuubi/commit/30da9068)  
-[[KYUUBI #3174] Update MATURITY for C30, RE50, CO20, CO40, CO50, CS10, IN10](https://github.com/apache/incubator-kyuubi/commit/024fa2db)  
-[[KYUUBI #3175] Add session conf advisor docs](https://github.com/apache/incubator-kyuubi/commit/3e860145)  
-[[KYUUBI #3141] Trino engine etcd support](https://github.com/apache/incubator-kyuubi/commit/c6caeb83)  
-[[KYUUBI #3150] Expose metadata request metrics](https://github.com/apache/incubator-kyuubi/commit/0089f2f0)  
-[[KYUUBI #3070][DOC] Add a doc of the Hudi connector for the Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/38c7c160)  
-[[KYUUBI #3104] Support SSL for Etcd](https://github.com/apache/incubator-kyuubi/commit/c17829bf)  
-[[KYUUBI #3152] Introduce JDBC parameters to control connection timeout](https://github.com/apache/incubator-kyuubi/commit/65ccf78b)  
-[[KYUUBI #3136] Change Map to a case class ApplicationInfo as the application info holder](https://github.com/apache/incubator-kyuubi/commit/24b93840)  
-[[KYUUBI #2240][SUB-TASK] Skip add metadata manager if frontend does not support rest](https://github.com/apache/incubator-kyuubi/commit/7aca75b1)  
-[[KYUUBI #3131] Improve operation state change logging](https://github.com/apache/incubator-kyuubi/commit/210d3567)  
-[[KYUUBI #3158] Fix npe issue when formatting the kyuubi-ctl output](https://github.com/apache/incubator-kyuubi/commit/0ddf7e38)  
-[[KYUUBI #3160][DOCS] `Dependencies` links in `Connectors for Spark SQL Query Engine` pages jump to wrong place #3160 (#3161)](https://github.com/apache/incubator-kyuubi/commit/3729a998)  
-[[KYUUBI #3154][Subtask] Connectors for Spark SQL Query Engine -> TiDB/TiKV](https://github.com/apache/incubator-kyuubi/commit/da87ca55)  
-[[KYUUBI #3082] Add iceberg connector doc for Trino SQL Engine](https://github.com/apache/incubator-kyuubi/commit/60cb4bd0)  
-[[KYUUBI #3071][DOC] Add iceberg connector for Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/0f21aa94)  
-[[KYUUBI #3153] Move batch util class to kyuubi rest sdk for programing friendly](https://github.com/apache/incubator-kyuubi/commit/34e7d1ad)  
-[[KYUUBI #3067][DOC] Add Flink Table Store connector doc for Spark SQL Engine](https://github.com/apache/incubator-kyuubi/commit/91a25349)  
-[[KYUUBI #3148] Change etcd docker image to recover arm64 CI](https://github.com/apache/incubator-kyuubi/commit/137e818c)  
-[[KYUUBI #3119] [TEST] Using  more light-weight SparkPi for batch related tests](https://github.com/apache/incubator-kyuubi/commit/6b414083)  
-[[KYUUBI #3023] Kyuubi Hive JDBC: Replace UGI-based Kerberos authentication w/ JAAS](https://github.com/apache/incubator-kyuubi/commit/87782097)  
-[[KYUUBI #3144] Remove deprecated KyuubiDriver in services manifest](https://github.com/apache/incubator-kyuubi/commit/d5dae096)  
-[[KYUUBI #3143] Check class loadable before applying SLF4JBridgeHandler](https://github.com/apache/incubator-kyuubi/commit/d07d7cc2)  
-[[KYUUBI #3139] Override toString method for rest dto classes](https://github.com/apache/incubator-kyuubi/commit/7b15f2ed)  
-[[KYUUBI #3087] Convert the kyuubi batch conf with `spark.` prefix so that spark could identify](https://github.com/apache/incubator-kyuubi/commit/eb96db54)  
-[[KYUUBI #3133] Always run Flink statement in sync mode](https://github.com/apache/incubator-kyuubi/commit/976af3d9)  
-[[KYUUBI #3121] [CI] Fix GA oom issue](https://github.com/apache/incubator-kyuubi/commit/0e910197)  
-[[KYUUBI #3126] Using markdown 3.3.7 for kyuubi document build](https://github.com/apache/incubator-kyuubi/commit/64090f50)  
-[[KYUUBI #3106] Correct `RelMetadataProvider` used in flink-sql-engine](https://github.com/apache/incubator-kyuubi/commit/0b3f6b73)  
-[[KYUUBI #3069][DOC] Add Iceberg connector doc for Spark SQL Engine](https://github.com/apache/incubator-kyuubi/commit/5c1ea6e5)  
-[[KYUUBI #3068][DOC] Add the Hudi connector doc for Spark SQL Query Engine](https://github.com/apache/incubator-kyuubi/commit/f1312ea4)  
-[[KYUUBI #3108][DOC] Fix path errors in the build document](https://github.com/apache/incubator-kyuubi/commit/4b640b72)  
-[[KYUUBI #3111] Replace HashMap with singletonMap](https://github.com/apache/incubator-kyuubi/commit/a5a97489)  
-[[KYUUBI #3113] Bump up delta lake version from 2.0.0rc1 to 2.0.0](https://github.com/apache/incubator-kyuubi/commit/65c3d2fb)  
-[[KYUUBI #3101] [Subtask][#3100] Build the content for extension points documentation](https://github.com/apache/incubator-kyuubi/commit/6c8024c8)  
-[[KYUUBI #3102] Fix multi endpoints for etcd](https://github.com/apache/incubator-kyuubi/commit/16f41694)  
-[[KYUUBI #3008] Bump prometheus from 0.14.1 to 0.16.0](https://github.com/apache/incubator-kyuubi/commit/b0685f9b)  
-[[KYUUBI #3095] Move TPC-DS/TPC-H queries to unique folder](https://github.com/apache/incubator-kyuubi/commit/7f592ecf)  
-[[KYUUBI #3094] Code refactor on Kyuubi Hive JDBC driver](https://github.com/apache/incubator-kyuubi/commit/eb705bd1)  
-[[KYUUBI #3092] Replace apache commons Base64 w/ JDK](https://github.com/apache/incubator-kyuubi/commit/47f8f9cc)  
-[[KYUUBI #3093] Fix Kyuubi Hive JDBC driver SPNEGO header](https://github.com/apache/incubator-kyuubi/commit/77b6ee0d)  
-[[KYUUBI #3050] Bump Apache Iceberg 0.14.0](https://github.com/apache/incubator-kyuubi/commit/69996224)  
-[[KYUUBI #3044] Bump Spark 3.2.2](https://github.com/apache/incubator-kyuubi/commit/720bc00c)  
-[[KYUUBI #3052][FOLLOWUP] Do not use the ip in proxy http header for authentication to prevent CVE](https://github.com/apache/incubator-kyuubi/commit/d75f48ea)  
-[[KYUUBI #3051] Support to get the real client ip address for thrift connection when using VIP as kyuubi server load balancer](https://github.com/apache/incubator-kyuubi/commit/8f3d7898)  
-[[KYUUBI #3046][Metrics] Add meter metrics for recording the rate of the operation state for each kyuubi operation](https://github.com/apache/incubator-kyuubi/commit/4bb06542)  
-[[KYUUBI #3045][FOLLOWUP] Correct the common options and add docs for kyuubi-admin command](https://github.com/apache/incubator-kyuubi/commit/99934591)  
-[[KYUUBI #3076][Subtask][#3039] Add the docs for rest api - Batch Resource](https://github.com/apache/incubator-kyuubi/commit/9cb8041d)  
-[[KYUUBI #3077] Remove meaningless statement override in LaunchEngine](https://github.com/apache/incubator-kyuubi/commit/6a6044be)  
-[[KYUUBI #3018] [Subtask] DorisSQLEngine - GetColumns Operation](https://github.com/apache/incubator-kyuubi/commit/419d725c)  
-[[KYUUBI #3073] CredentialsManager should use appUser to renew credential](https://github.com/apache/incubator-kyuubi/commit/82d61c9f)  
-[[KYUUBI #3065] Support to retry the killApplicationByTag for JpsApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/0857786e)  
-[[KYUUBI #3054] Add description of the discovery client in the conf doc](https://github.com/apache/incubator-kyuubi/commit/ce72a502)  
-[[KYUUBI #3043][FOLLOWUP] Restore accidentally removed public APIs of kyuubi-hive-jdbc module](https://github.com/apache/incubator-kyuubi/commit/642b2769)  
-[[KYUUBI #3060] [Subtask][#3059] Build content of the connector document section](https://github.com/apache/incubator-kyuubi/commit/48647623)  
-[[KYUUBI #3045] Support to do admin rest request with kyuubi-adminctl](https://github.com/apache/incubator-kyuubi/commit/a7d190dd)  
-[[KYUUBI #3055] Expose client ip address into batch request conf](https://github.com/apache/incubator-kyuubi/commit/4c3a9ed0)  
-[[KYUUBI #3052] Support to get the real client ip address for http connection when using VIP as kyuubi server load balancer](https://github.com/apache/incubator-kyuubi/commit/a3973a0b)  
-[[KYUUBI #3043] Clean up Kyuubi Hive JDBC client](https://github.com/apache/incubator-kyuubi/commit/b99f25f2)  
-[[KYUUBI #3047] Fallback krb5 conf to OS if not configured](https://github.com/apache/incubator-kyuubi/commit/a5f733b4)  
-[[KYUUBI #2644] Add etcd discovery client for HA](https://github.com/apache/incubator-kyuubi/commit/32970ce6)  
-[[KYUUBI #3040] [Subtask][#3039] Build the skeleton of client side documentation](https://github.com/apache/incubator-kyuubi/commit/9060bf22)  
-[[KYUUBI #2974][FEATURE] EOL Support for Spark 3.0](https://github.com/apache/incubator-kyuubi/commit/c1158acc)  
-[[KYUUBI #3042] Kyuubi Hive JDBC should throw KyuubiSQLException](https://github.com/apache/incubator-kyuubi/commit/1bc3916d)  
-[[KYUUBI #3037] Handles configuring the JUL -> SLF4J bridge](https://github.com/apache/incubator-kyuubi/commit/c5d29260)  
-[[KYUUBI #3033][Bug] Kyuubi failed to start due to PID directory not exists](https://github.com/apache/incubator-kyuubi/commit/d3446675)  
-[[KYUUBI #3028][FLINK] Bump Flink versions to 1.14.5 and 1.15.1](https://github.com/apache/incubator-kyuubi/commit/1fbe16fc)  
-[[KYUUBI #2478][FOLLOWUP] Fix bin/beeline without -u exits unexpectedly](https://github.com/apache/incubator-kyuubi/commit/e4929949)  
-[[KYUUBI #3025] Fix the kyuubi restful href link format issue](https://github.com/apache/incubator-kyuubi/commit/95cb57e8)  
-[[KYUUBI #3007] Bump scopt from 4.0.1 to 4.1.0](https://github.com/apache/incubator-kyuubi/commit/c652bba4)  
-[[KYUUBI #3019] Backport HIVE-21538 - Beeline: password source through the console reader did not pass to connection param](https://github.com/apache/incubator-kyuubi/commit/a6499c6c)  
-[[KYUUBI #3017] kyuubi-ctl should print error message to right place](https://github.com/apache/incubator-kyuubi/commit/3c75e9de)  
-[[KYUUBI #3010] Bump Jetty from 9.4.41.v20210516 to 9.4.48.v20220622](https://github.com/apache/incubator-kyuubi/commit/1f59a592)  
-[[KYUUBI #3011] Bump swagger from 2.1.11 to 2.2.1](https://github.com/apache/incubator-kyuubi/commit/dc6e764f)  
-[[KYUUBI #3009] Bump Jersey from 2.35 to 2.36](https://github.com/apache/incubator-kyuubi/commit/a2431d0c)  
-[[KYUUBI #2801][FOLLOWUP] Also check whether the batch main resource path is in local dir allow list](https://github.com/apache/incubator-kyuubi/commit/c922ae28)  
-[[KYUUBI #3012] Remove unused thrift request max attempts and related ut](https://github.com/apache/incubator-kyuubi/commit/13e618cf)  
-[[KYUUBI #3005] [DOCS] Correct spelling errors and optimizations in 'Building Kyuubi Documentation' part](https://github.com/apache/incubator-kyuubi/commit/3203829f)  
-[[KYUUBI #3004] Clean up JDBC shaded client pom and license](https://github.com/apache/incubator-kyuubi/commit/3e5a92ef)  
-[[KYUUBI #2895] Show final info in trino engine](https://github.com/apache/incubator-kyuubi/commit/66a45f3e)  
-[[KYUUBI #2984] Refactor TPCDS configurations using SparkConfParser](https://github.com/apache/incubator-kyuubi/commit/9e2aaffc)  
-[[KYUUBI #2996] Remove Hive storage-api dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/4b8dc796)  
-[[KYUUBI #2997] Use spark shim set current namespace](https://github.com/apache/incubator-kyuubi/commit/407fc8db)  
-[[KYUUBI #2994] Remove Hive common dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/774934fe)  
-[[KYUUBI #2850][FOLLOWUP] Replace log4j2.properties by log4j2.xml](https://github.com/apache/incubator-kyuubi/commit/c7e2b322)  
-[[KYUUBI #2801] Add local dir allow list and check the application access path URI](https://github.com/apache/incubator-kyuubi/commit/b585fb42)  
-[[KYUUBI #2953] Support to interrupt the thrift request if remote engine is broken](https://github.com/apache/incubator-kyuubi/commit/03e55e0c)  
-[[KYUUBI #2999] Fix Kyuubi Hive Beeline dependencies](https://github.com/apache/incubator-kyuubi/commit/5eb83b4c)  
-[[KYUUBI #2850][FOLLOWUP] Fix default log4j2 configuration](https://github.com/apache/incubator-kyuubi/commit/8dddfeb0)  
-[[KYUUBI #2977] [BATCH] Using KyuubiApplicationManger#tagApplication help tag batch application](https://github.com/apache/incubator-kyuubi/commit/b174d0c1)  
-[[KYUUBI #2993] Fix typo in KyuubiConf and mark more config entries server only](https://github.com/apache/incubator-kyuubi/commit/163e0f82)  
-[[KYUUBI #2987] Remove Hive shims-common and shims-0.23 dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/d8d6903f)  
-[[KYUUBI #2985] Prompt configuration when starting engine timeout](https://github.com/apache/incubator-kyuubi/commit/8d09c83b)  
-[[KYUUBI #2989] Remove HS2 active-passive support in Kyuubi Hive JDBC client](https://github.com/apache/incubator-kyuubi/commit/56c01616)  
-[[KYUUBI #2983] Remove Hive llap-client dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/e41ef566)  
-[[KYUUBI #2981] Improve TPC-DS scan performance](https://github.com/apache/incubator-kyuubi/commit/3a80f33b)  
-[[KYUUBI #2917] Remove Hive service dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/145a18db)  
-[[KYUUBI #2868] [K8S] Add KubernetesApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/3bc299d0)  
-[[KYUUBI #2979] Fix helm icon url](https://github.com/apache/incubator-kyuubi/commit/cfe380ae)  
-[[KYUUBI #2978] [SUB-TASK][KPIP-4] If batch app status not found from cluster manager, fall back to metadata store](https://github.com/apache/incubator-kyuubi/commit/b115c8dd)  
-[[KYUUBI #2975] Code improvement in rest client](https://github.com/apache/incubator-kyuubi/commit/bd2f5b23)  
-[[KYUUBI #2964] [SUB-TASK][KPIP-4] Refine the batch response and render](https://github.com/apache/incubator-kyuubi/commit/6f308c43)  
-[[KYUUBI #2976] Expose session name into kyuubi engine tab](https://github.com/apache/incubator-kyuubi/commit/2d0bb9f2)  
-[[KYUUBI #2850][FOLLOWUP] Provide log4j2.xml.template in binary and use log4j2-defaults.xml](https://github.com/apache/incubator-kyuubi/commit/cec8b03f)  
-[[KYUUBI #2963] Bump Delta 2.0.0rc1](https://github.com/apache/incubator-kyuubi/commit/b7cd6f97)  
-[[KYUUBI #2918][Bug] Kyuubi integrated Ranger failed to query: table stats must be specified](https://github.com/apache/incubator-kyuubi/commit/8d4d00fe)  
-[[KYUUBI #2966] Remove TProtocolVersion from SessionHandle/OperationHandle](https://github.com/apache/incubator-kyuubi/commit/a9908a1b)  
-[[KYUUBI #2972] Using stdout for the output of kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/33872057)  
-[[KYUUBI #2973] Decorate LOG in the RetryableRestClient with static final](https://github.com/apache/incubator-kyuubi/commit/5412d1b9)  
-[[KYUUBI #2962] [SUB-TASK][KPIP-4] Throw exception if the metadata update count is zero](https://github.com/apache/incubator-kyuubi/commit/10affbf6)  
-[[KYUUBI #2956] Support to config the connect/socket timeout of rest client for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/6e4b5582)  
-[[KYUUBI #2960] TFrontendService.SERVER_VERSION shall be HIVE_CLI_SERVICE_PROTOCOL_V11](https://github.com/apache/incubator-kyuubi/commit/cf27278f)  
-[[KYUUBI #2957] [SUB-TASK][KPIP-4] Use canonical host name for kyuubi instance](https://github.com/apache/incubator-kyuubi/commit/defae6bd)  
-[[KYUUBI #2952] Remove OperationType from OperationHandle for simplification](https://github.com/apache/incubator-kyuubi/commit/6c44a7bb)  
-[[KYUUBI #2955] BatchRequest args fix: need toString operation for different data types](https://github.com/apache/incubator-kyuubi/commit/fbb434c4)  
-[[KYUUBI #2949] Flaky test: execute statement - analysis exception](https://github.com/apache/incubator-kyuubi/commit/994dc6ce)  
-[[KYUUBI #2951] No need to extend CompositeService for MetadataManager](https://github.com/apache/incubator-kyuubi/commit/c8f18f00)  
-[[KYUUBI #2948] Remove thrift request timeout for KyuubiSyncThriftClient](https://github.com/apache/incubator-kyuubi/commit/6a9d5ff2)  
-[[KYUUBI #2943][Bug][K8S] Remove Start Local Kyuubi Server For Kyuubi On K8S Test](https://github.com/apache/incubator-kyuubi/commit/caa3ed2a)  
-[[KYUUBI #2924] Correct the frontend server start state](https://github.com/apache/incubator-kyuubi/commit/de2e11c2)  
-[[KYUUBI #886][FOLLOWUP] Support to reload hadoop conf for KyuubiTHttpFrontendService](https://github.com/apache/incubator-kyuubi/commit/9bc0aa67)  
-[[KYUUBI #2929] Kyuubi integrated Ranger does not support the CTAS syntax](https://github.com/apache/incubator-kyuubi/commit/7460e745)  
-[[KYUUBI #2935] Support spnego authentication for thrift http transport mode](https://github.com/apache/incubator-kyuubi/commit/ceb66bd6)  
-[[KYUUBI #2894] Add synchronized for the ciphers of internal security accessor](https://github.com/apache/incubator-kyuubi/commit/37c0d425)  
-[[KYUUBI #2927] Fix the thread in ScheduleThreadExecutorPool that can't be shut down immediately](https://github.com/apache/incubator-kyuubi/commit/f629992f)  
-[[KYUUBI #2876] Bump Hudi 0.11.1](https://github.com/apache/incubator-kyuubi/commit/125730a7)  
-[[KYUUBI #2922] Clean up SparkConsoleProgressBar when SQL execution fails](https://github.com/apache/incubator-kyuubi/commit/aba785f3)  
-[[KYUUBI #2919] Fix typo and wording for JDBCMetadataStoreConf](https://github.com/apache/incubator-kyuubi/commit/825c70db)  
-[[KYUUBI #2920] Fix typo for mysql metadata schema](https://github.com/apache/incubator-kyuubi/commit/f3610b2b)  
-[[KYUUBI #2890] Get the db From SparkSession When TableIdentifier's Database Field Is Empty](https://github.com/apache/incubator-kyuubi/commit/062d8746)  
-[[KYUUBI #2915] Revert "[KYUUBI #2000][DEPS] Bump Hadoop 3.3.2"](https://github.com/apache/incubator-kyuubi/commit/3435e2ae)  
-[[KYUUBI #2911] [SUB-TASK][KPIP-4] If the kyuubi instance unreachable, support to backfill state from resource manager and mark batch closed by remote kyuubi instance](https://github.com/apache/incubator-kyuubi/commit/089cf412)  
-[[KYUUBI #2912] [INFRA][DOCS] Improve release md](https://github.com/apache/incubator-kyuubi/commit/07080f35)  
-[[KYUUBI #2905][DOCS] Update the number of new committers in MATURITY.md](https://github.com/apache/incubator-kyuubi/commit/2305159a)  
-[[KYUUBI #2745] [Subtask] DorisSQLEngine - GetTables Operation](https://github.com/apache/incubator-kyuubi/commit/ea7ca789)  
-[[KYUUBI #2628][FOLLOWUP] Support waitCompletion for submit batch](https://github.com/apache/incubator-kyuubi/commit/c664c84f)  
-[[KYUUBI #2827] [BUILD][TEST] Decouple integration tests from kyuubi-server](https://github.com/apache/incubator-kyuubi/commit/2fd4e3a8)  
-[[KYUUBI #2898] Bump maven-surefire-plugin 3.0.0-M7](https://github.com/apache/incubator-kyuubi/commit/7f0c53a0)  
-[[KYUUBI #2628][FOLLOWUP] Reuse the kyuubi-ctl batch commands for SubmitBatchCommand](https://github.com/apache/incubator-kyuubi/commit/e1f74673)  
-[[KYUUBI #2897] Remove Hive metastore dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/62b6987a)  
-[[KYUUBI #2782] Decouple Kyuubi Hive JDBC from Hive Serde](https://github.com/apache/incubator-kyuubi/commit/e3bf6044)  
-[[KYUUBI #2854] Add exception field in KyuubiSessionEvent](https://github.com/apache/incubator-kyuubi/commit/99959b89)  
-[[KYUUBI #2873] [INFRA][DOCS] Improve release template script](https://github.com/apache/incubator-kyuubi/commit/9de3365f)  
-[[KYUUBI #2761] Flaky Test: engine.jdbc.doris.StatementSuite - test select](https://github.com/apache/incubator-kyuubi/commit/4e975507)  
-[[KYUUBI #2834] [SUB-TASK][KPIP-4] Support to retry the metadata requests on transient issue and unblock main thread](https://github.com/apache/incubator-kyuubi/commit/7baf9895)  
-[[KYUUBI #2543] Add `maxPartitionBytes` configuration for TPC-DS connector](https://github.com/apache/incubator-kyuubi/commit/7b24ee93)  
-[[KYUUBI #886] Add HTTP transport mode support to KYUUBI - no Kerberos support](https://github.com/apache/incubator-kyuubi/commit/1ea245d2)  
-[[KYUUBI #2628][FOLLOWUP] Refine kyuubi-ctl batch commands](https://github.com/apache/incubator-kyuubi/commit/27330ddb)  
-[[KYUUBI #2859][SUB-TASK][KPIP-4] Support `--conf` for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/57e37334)  
-[[KYUUBI #2888] Bump Spark-3.3.0](https://github.com/apache/incubator-kyuubi/commit/eaffb27c)  
-[[KYUUBI #2708] Open engine session and renew engine credentials in the one](https://github.com/apache/incubator-kyuubi/commit/37229d41)  
-[[KYUUBI #2668][FOLLOWUP] Add log4j for rest client test](https://github.com/apache/incubator-kyuubi/commit/b987a680)  
-[[KYUUBI #2881] [SUB-TASK][KPIP-4] Rest client supports retry request if catch net exception](https://github.com/apache/incubator-kyuubi/commit/dea68bc0)  
-[[KYUUBI #2883] [Bug] java.lang.NoClassDefFoundError: org/apache/hadoop/hive/common/ValidWriteIdList in HiveDelegationTokenProvider#initialize](https://github.com/apache/incubator-kyuubi/commit/bdceaaf1)  
-[[KYUUBI #2813] Bump Iceberg 0.13.2](https://github.com/apache/incubator-kyuubi/commit/87dd1df5)  
-[[KYUUBI #2628][SUB-TASK][KPIP-4] Implement kyuubi-ctl for batch job operation](https://github.com/apache/incubator-kyuubi/commit/cb483385)  
-[[KYUUBI #2872] Catch the exception for the iterator job when incremental collect is enabled](https://github.com/apache/incubator-kyuubi/commit/383a7a84)  
-[[KYUUBI #2870] Fix sf0 query error in TPCH](https://github.com/apache/incubator-kyuubi/commit/88388951)  
-[[KYUUBI #2861][FOLLOWUP][GA] Daily publish snapshot with profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/e7a872fd)  
-[[KYUUBI #2865] Bump Spark 3.3.0-rc6](https://github.com/apache/incubator-kyuubi/commit/5510421b)  
-[[KYUUBI #2863] Unify the logic of tpch and tpcds to generate golden file](https://github.com/apache/incubator-kyuubi/commit/9403566d)  
-[[KYUUBI #2862] [BUILD] Release script supports Spark 3.3](https://github.com/apache/incubator-kyuubi/commit/c4955a8d)  
-[[KYUUBI #2861] [GA] Daily publish snapshot with profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/67ad2556)  
-[[KYUUBI #2624] Support isExtended for FilteredShowTablesCommand in AuthZ module.](https://github.com/apache/incubator-kyuubi/commit/ee8eceb2)  
-[[KYUUBI #2858] Support skipTests for kyuubi rest client module](https://github.com/apache/incubator-kyuubi/commit/b3fcc9ed)  
-[[KYUUBI #2848] Global temp view should only exist in session catalog](https://github.com/apache/incubator-kyuubi/commit/60b0cd18)  
-[[KYUUBI #2849] Close the engine alive pool gracefully](https://github.com/apache/incubator-kyuubi/commit/ce56d700)  
-[[KYUUBI #2851] Log session name when opening/closing session](https://github.com/apache/incubator-kyuubi/commit/21abfd2b)  
-[[KYUUBI #2247] Change log4j2 properties to xml](https://github.com/apache/incubator-kyuubi/commit/0acf9717)  
-[[KYUUBI #2704] verify TPC-DS query output](https://github.com/apache/incubator-kyuubi/commit/df1ebbad)  
-[[KYUUBI #2846] Add v1.5.2-incubating changelog](https://github.com/apache/incubator-kyuubi/commit/d958a2c8)  
-[[KYUUBI #2842] [TEST] Optimize the output of ExceptionThrowingDelegationTokenProvider in the Test](https://github.com/apache/incubator-kyuubi/commit/c2874818)  
-[[KYUUBI #2829] Make secret id static and remove thrift protocol from RPC handles](https://github.com/apache/incubator-kyuubi/commit/12a48ba2)  
-[[KYUUBI #2820][FOLLOWUP] Fix duplicate SPNEGO typo](https://github.com/apache/incubator-kyuubi/commit/3bfebd24)  
-[[KYUUBI #2839] Refactor changelog](https://github.com/apache/incubator-kyuubi/commit/2c7a5651)  
-[[KYUUBI #2781] Fix KyuubiDataSource#getConnection to set user and password](https://github.com/apache/incubator-kyuubi/commit/85d0656b)  
-[[KYUUBI #2845] [GA] Stop daily publish on branch-1.3](https://github.com/apache/incubator-kyuubi/commit/24c74e8c)  
-[[KYUUBI #2805] Add TPC-H queries verification](https://github.com/apache/incubator-kyuubi/commit/032d9ca7)  
-[[KYUUBI #2837] [BUILD] Support publish to private repo](https://github.com/apache/incubator-kyuubi/commit/ead33d79)  
-[[KYUUBI #2820][SUB-TASK][KPIP-4] Support to redirect getLocalLog and closeBatchSession requests across kyuubi instances](https://github.com/apache/incubator-kyuubi/commit/f8e20e3c)  
-[[KYUUBI #2830] Improve Z-Order with Spark 3.3](https://github.com/apache/incubator-kyuubi/commit/9d706e55)  
-[[KYUUBI #2746][INFRA] Improve NOTICE of binary release](https://github.com/apache/incubator-kyuubi/commit/a06a2ca4)  
-[[KYUUBI #2825] [BUILD] Remove kyuubi-flink-sql-engine from kyuubi-server dependencies](https://github.com/apache/incubator-kyuubi/commit/ddd60fc4)  
-[[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes](https://github.com/apache/incubator-kyuubi/commit/dd96983b)  
-[[KYUUBI #2643][FOLLOWUP] Using javax AuthenticationException instead of hadoop AuthenticationException](https://github.com/apache/incubator-kyuubi/commit/daa5bfed)  
-[[KYUUBI #2373][SUB-TASK][KPIP-4] Support to recovery batch session on Kyuubi instances restart](https://github.com/apache/incubator-kyuubi/commit/e7257251)  
-[[KYUUBI #2824] [TEST] Replace test tag ExtendedSQLTest by Slow](https://github.com/apache/incubator-kyuubi/commit/afb08c74)  
-[[KYUUBI #2822] [GA] Set log level to info](https://github.com/apache/incubator-kyuubi/commit/8539a568)  
-[[KYUUBI #2817] Bump Spark 3.3.0-rc5](https://github.com/apache/incubator-kyuubi/commit/9ee4a9e3)  
-[[KYUUBI #2676] Flaky Test: SparkOperationProgressSuite: test operation progress](https://github.com/apache/incubator-kyuubi/commit/411992cd)  
-[[KYUUBI #2812] [SUB-TASK][KPIP-4] Refine the batch info response](https://github.com/apache/incubator-kyuubi/commit/0cb6f162)  
-[[KYUUBI #2469] Support RangerDefaultAuditHandler for AuthZ module](https://github.com/apache/incubator-kyuubi/commit/c56a13af)  
-[[KYUUBI #2807] Trino, Hive and JDBC Engine support session conf in newExecuteStatementOperation](https://github.com/apache/incubator-kyuubi/commit/aabc53ec)  
-[[KYUUBI #2814] Set JAVA_HOME in travis via javac](https://github.com/apache/incubator-kyuubi/commit/9265cf3e)  
-[[KYUUBI #2804] add flaky test report template](https://github.com/apache/incubator-kyuubi/commit/aaf14a12)  
-[[KYUUBI #2800][FOLLOWUP] Return CloseBatchResponse for kyuubi rest client deleteBatch](https://github.com/apache/incubator-kyuubi/commit/d881d318)  
-[[KYUUBI #2800] Refine batch mode code path](https://github.com/apache/incubator-kyuubi/commit/bb98aa75)  
-[[KYUUBI #2802] Retry opening the TSocket in KyuubiSyncThriftClient](https://github.com/apache/incubator-kyuubi/commit/21845266)  
-[[KYUUBI #2742] Introduce admin resource for service admin - refresh frontend hadoop conf without restart](https://github.com/apache/incubator-kyuubi/commit/b0495f3c)  
-[[KYUUBI #2794] Change KyuubiRestException to extend RuntimeException](https://github.com/apache/incubator-kyuubi/commit/9ed652e9)  
-[[KYUUBI #2793][DOCS] Add debugging engine](https://github.com/apache/incubator-kyuubi/commit/a3718f9b)  
-[[KYUUBI #2788] Add excludeDatabases for TPC-H catalogs](https://github.com/apache/incubator-kyuubi/commit/05ee1964)  
-[[KYUUBI #2780] Refine stylecheck](https://github.com/apache/incubator-kyuubi/commit/6cd2ad9e)  
-[[KYUUBI #2789] Kyuubi Spark TPC-H Connector - Add tiny scale](https://github.com/apache/incubator-kyuubi/commit/74ff5cf3)  
-[[KYUUBI #2765][SUB-TASK][KPIP-4] Refactor current kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a4622301)  
-[[KYUUBI #2717][FOLLOWUP] Fix BatchRestApiSuite due to jdbc state store UPPER the batch type](https://github.com/apache/incubator-kyuubi/commit/80d45e42)  
-[[KYUUBI #2717] [SUB-TASK][KPIP-4] Introduce jdbc session state store for batch session multiple HA](https://github.com/apache/incubator-kyuubi/commit/73c6b1b1)  
-[[KYUUBI #2643][FOLLOWUP] Generate spnego auth token dynamically per request](https://github.com/apache/incubator-kyuubi/commit/a04afabd)  
-[[KYUUBI #2741] Add kyuubi-spark-connector-common module](https://github.com/apache/incubator-kyuubi/commit/d24c18db)  
-[[KYUUBI #2775] Add excludeDatabases for TPC-DS catalogs](https://github.com/apache/incubator-kyuubi/commit/d2ceb041)  
-[[KYUUBI #2643][FOLLOWUP] Refine the rest sdk](https://github.com/apache/incubator-kyuubi/commit/fc51cfb3)  
-[[KYUUBI #2553] Kyuubi Spark TPC-DS Connector - Add tiny scale](https://github.com/apache/incubator-kyuubi/commit/8578bcd4)  
-[[KYUUBI #2772] Kyuubi Spark TPC-H Connector - use log4j1](https://github.com/apache/incubator-kyuubi/commit/d49377e7)  
-[[KYUUBI #2643] [SUB-TASK][KPIP-4] Implement kyuubi rest sdk for batch job operation](https://github.com/apache/incubator-kyuubi/commit/b817fcf7)  
-[[KYUUBI #2763] Expected error code for invalid basic/spnego authentication should be SC_FORBIDDEN](https://github.com/apache/incubator-kyuubi/commit/60d559ef)  
-[[KYUUBI #2768] Use the default DB passed in by session in Flink](https://github.com/apache/incubator-kyuubi/commit/a8270163)  
-[[KYUUBI #2764] [DOCS] Fix tables in docs being covered by right toc sidebar](https://github.com/apache/incubator-kyuubi/commit/414d1a86)  
-[[KYUUBI #2760] Add adapter layer in Kyuubi Hive JDBC module](https://github.com/apache/incubator-kyuubi/commit/42f18378)  
-[[KYUUBI #2751] [DOC] Replace sphinx_rtd_theme with sphinx_book_theme](https://github.com/apache/incubator-kyuubi/commit/e1921fc8)  
-[[KYUUBI #2754] [GA] Separate log archive name](https://github.com/apache/incubator-kyuubi/commit/b1895913)  
-[[KYUUBI #2755] [Subtask] DorisSQLEngine - add jdbc label](https://github.com/apache/incubator-kyuubi/commit/1ca56c17)  
-[[KYUUBI #2752] Kyuubi Spark TPC-DS Connector - configurable catalog's name by initialize method](https://github.com/apache/incubator-kyuubi/commit/9766db12)  
-[[KYUUBI #2721] Implement dedicated set/get catalog/database operators](https://github.com/apache/incubator-kyuubi/commit/9b502307)  
-[[KYUUBI #2664] Kyuubi Spark TPC-H Connector - SupportsReportStatistics](https://github.com/apache/incubator-kyuubi/commit/dbe315e8)  
-[[KYUUBI #2471][FOLLOWUP] Remove unexpected test-function.jar](https://github.com/apache/incubator-kyuubi/commit/ff1d7ec7)  
-[[KYUUBI #2743] Colorful kyuubi logo support](https://github.com/apache/incubator-kyuubi/commit/c0f0089f)  
-[[KYUUBI #2736] Upgrade Jackson 2.13.3](https://github.com/apache/incubator-kyuubi/commit/a8943bc3)  
-[[KYUUBI #2735] Test Spark 3.3.0-rc3](https://github.com/apache/incubator-kyuubi/commit/e9797c02)  
-[[KYUUBI #2665] Kyuubi Spark TPC-H Connector - SupportsNamespaces](https://github.com/apache/incubator-kyuubi/commit/49352b5a)  
-[[KYUUBI #2543] Add TPCDSTable generate benchmark](https://github.com/apache/incubator-kyuubi/commit/25383698)  
-[[KYUUBI #2658] [Subtask] DorisSQLEngine with execute statement support](https://github.com/apache/incubator-kyuubi/commit/7f945017)  
-[[KYUUBI #2730] [WIP][KYUUBI #2238] Support Flink 1.15](https://github.com/apache/incubator-kyuubi/commit/c84ea87c)  
-[[KYUUBI #2663] Kyuubi Spark TPC-H Connector - Initial implementation](https://github.com/apache/incubator-kyuubi/commit/81c48b0c)  
-[[KYUUBI #2631] Rename high availability config key to support multi discovery client](https://github.com/apache/incubator-kyuubi/commit/3b81a495)  
-[[KYUUBI #2733] [CI] Cross version verification for spark-3.3](https://github.com/apache/incubator-kyuubi/commit/7a789a25)  
-[[KYUUBI #2285] Trino's result fetching method is changed to a streaming iterator mode to avoid holding data at server side](https://github.com/apache/incubator-kyuubi/commit/3114b393)  
-[[KYUUBI #2718][KYUUBI #2405] Support Flink StringData Data Type](https://github.com/apache/incubator-kyuubi/commit/5b9d92e9)  
-[[KYUUBI #2719] [SUB-TASK][KPIP-4] Support internal rest request authentication to enable http request redirection across kyuubi instances](https://github.com/apache/incubator-kyuubi/commit/f1cf95fe)  
-[[KYUUBI #2720] Fix KyuubiDatabaseMetaData#supportsCatalogs*](https://github.com/apache/incubator-kyuubi/commit/95784751)  
-[[KYUUBI #2706] Spark extensions support Spark-3.3](https://github.com/apache/incubator-kyuubi/commit/85cbea40)  
-[[KYUUBI #2714] Log4j2 layout pattern add date](https://github.com/apache/incubator-kyuubi/commit/7584e3ab)  
-[[KYUUBI #2686][FOLLOWUP] Avoid potential flaky test](https://github.com/apache/incubator-kyuubi/commit/45531f01)  
-[[KYUUBI #2594][FOLLOWUP] Fix flaky Test - support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/8905bded)  
-[[KYUUBI #2701] Kyuubi Spark TPC-DS Connector - Rework SupportsReportStatistics and code refactor](https://github.com/apache/incubator-kyuubi/commit/b673b2f5)  
-[[KYUUBI #2712] Bump Spark master to 3.4.0-SNAPSHOT](https://github.com/apache/incubator-kyuubi/commit/7d81fd08)  
-[[KYUUBI #2541] Set nullable in table schema](https://github.com/apache/incubator-kyuubi/commit/93753292)  
-[[KYUUBI #2709] Improve TPCDSTable display in Spark Web UI](https://github.com/apache/incubator-kyuubi/commit/0c5b0d1a)  
-[[KYUUBI #2619] Add profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/85d68b20)  
-[[KYUUBI #2702] Fix TPC-DS columns name and add TPC-DS queries verification](https://github.com/apache/incubator-kyuubi/commit/aa4ac58c)  
-[[KYUUBI #2348] Add it test for trino engine](https://github.com/apache/incubator-kyuubi/commit/33c81624)  
-[[KYUUBI #2686] Fix lock bug if engine initialization timeout](https://github.com/apache/incubator-kyuubi/commit/c210fdae)  
-[[KYUUBI #2690] Make ProcessBuilder.commands immutable](https://github.com/apache/incubator-kyuubi/commit/010a34d1)  
-[[KYUUBI #2696] [TEST] Stop NoopServer should not throw exception](https://github.com/apache/incubator-kyuubi/commit/5585dd01)  
-[[KYUUBI #2700] Handle SPARK-37929 breaking change in TPCDSCatalog](https://github.com/apache/incubator-kyuubi/commit/18e9d09e)  
-[[KYUUBI #2683] Add INFO log in ServiceDiscovery.stopGracefully](https://github.com/apache/incubator-kyuubi/commit/866e4d1f)  
-[[KYUUBI #2694] EngineEvent.toString outputs application tags](https://github.com/apache/incubator-kyuubi/commit/27030d39)  
-[[KYUUBI #2594] Fix flaky Test - support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/56efdf8c)  
-[[KYUUBI #2668][FOLLOWUP] Remove unused Option because the collection is never null](https://github.com/apache/incubator-kyuubi/commit/b2495e96)  
-[[KYUUBI #2675] Fix compatibility for spark authz with spark v3.3](https://github.com/apache/incubator-kyuubi/commit/60b9f6bc)  
-[[KYUUBI #2680] Remove SwaggerScalaModelConverter after rest dto classes rewritten in Java](https://github.com/apache/incubator-kyuubi/commit/98ed40f5)  
-[[KYUUBI #2642] Fix flaky test - JpsApplicationOperation with spark local mode](https://github.com/apache/incubator-kyuubi/commit/c1fb7bfb)  
-[[KYUUBI #2668] [SUB-TASK][KPIP-4] Rewrite the rest DTO classes in java](https://github.com/apache/incubator-kyuubi/commit/31fdd7ec)  
-[[KYUUBI #2670] Delete the useless judgment in the extractURLComponents method of Utils.java](https://github.com/apache/incubator-kyuubi/commit/32165362)  
-[[KYUUBI #2672] Check if the table exists](https://github.com/apache/incubator-kyuubi/commit/5588cd50)  
-[[KYUUBI #2641] Client should not assume launch engine has completed on exception](https://github.com/apache/incubator-kyuubi/commit/b40bcbda)  
-[[KYUUBI #2666] Backport HIVE-24694 to Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/27cf57bd)  
-[[KYUUBI #2661] [SUB-TASK][KPIP-4] Rename GET /batches/$batchId/log to GET /batches/$batchId/localLog](https://github.com/apache/incubator-kyuubi/commit/a163f3a8)  
-[[KYUUBI #2650] Add FilteredShowColumnsCommand to AuthZ module](https://github.com/apache/incubator-kyuubi/commit/2facc0b6)  
-[[KYUUBI #2576][FOLLOWUP] Bump Hudi 0.11.0](https://github.com/apache/incubator-kyuubi/commit/f16ac8be)  
-[[KYUUBI #2540] Kyuubi Spark TPC-DS Connector - SupportsNamespaces](https://github.com/apache/incubator-kyuubi/commit/b088f39f)  
-[[KYUUBI #2655] Using the defined app keys for JpsApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/31878859)  
-[[KYUUBI #2640] Implement TGetInfoType CLI_ODBC_KEYWORDS](https://github.com/apache/incubator-kyuubi/commit/8de2f5f1)  
-[[KYUUBI #2601] Add a config to support different service discovery client class implementation](https://github.com/apache/incubator-kyuubi/commit/50584f2a)  
-[[KYUUBI #2471] Fix the bug of dynamically loading external packages](https://github.com/apache/incubator-kyuubi/commit/1ab68974)  
-[[KYUUBI #2636] Refine BatchesResourceSuite](https://github.com/apache/incubator-kyuubi/commit/a5bb93e5)  
-[[KYUUBI #2634] [SUB-TASK][KPIP-4] Enhance the response error msg](https://github.com/apache/incubator-kyuubi/commit/e4e88355)  
-[[KYUUBI #2616] Remove embedded mode support in Kyuubi Hive JDBC driver](https://github.com/apache/incubator-kyuubi/commit/52817e81)  
-[[KYUUBI #2605] Make SQLOperationListener configurable](https://github.com/apache/incubator-kyuubi/commit/9a2fc86b)  
-[[KYUUBI #2474] [Improvement] Add FilteredShowFunctionsCommand to Authz module](https://github.com/apache/incubator-kyuubi/commit/d30f078c)  
-[[KYUUBI #2604] Hive Backend Engine - Multi tenancy support](https://github.com/apache/incubator-kyuubi/commit/f2b9776e)  
-[[KYUUBI #2576] Bump Hudi 0.11.0](https://github.com/apache/incubator-kyuubi/commit/cb5f49e3)  
-[[KYUUBI #2473][FOLLOWUP] Simplify FilteredShowNamespaceExec](https://github.com/apache/incubator-kyuubi/commit/bea53092)  
-[[KYUUBI #2621] Always use Hadoop shaded client](https://github.com/apache/incubator-kyuubi/commit/981b4161)  
-[[KYUUBI #2615] Add support HIVE_CLI_SERVICE_PROTOCOL_V11](https://github.com/apache/incubator-kyuubi/commit/25471506)  
-[[KYUUBI #2473] [Improvement] Add FilteredShowDatabasesCommand to AuthZ module](https://github.com/apache/incubator-kyuubi/commit/42936aa2)  
-[[KYUUBI #2626] Replace literal by FetchType.LOG](https://github.com/apache/incubator-kyuubi/commit/32b38cc6)  
-[[KYUUBI #2614] Add commons-io to beeline module since jdbc upgraded to 3.1.3](https://github.com/apache/incubator-kyuubi/commit/098ae16c)  
-[[KYUUBI #2539][Subtask] Kyuubi Spark TPC-DS Connector - SupportsReportStatistics](https://github.com/apache/incubator-kyuubi/commit/d3b9c77c)  
-[[KYUUBI #2591] Redact secret information from ProcBuilder log](https://github.com/apache/incubator-kyuubi/commit/1fee068c)  
-[[KYUUBI #2542] [Subtask] Kyuubi Spark TPC-DS Connector - Make useAnsiStringType configurable](https://github.com/apache/incubator-kyuubi/commit/802890a7)  
-[[KYUUBI #2607] Introduce new module and setup testcontainers-based Kudu service for testing](https://github.com/apache/incubator-kyuubi/commit/b85045ad)  
-[[KYUUBI #2333][KYUUBI #2554] Configuring Flink Engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/6b6da1f4)  
-[[KYUUBI #2029] Hive Backend Engine - Operation Logs](https://github.com/apache/incubator-kyuubi/commit/b8fd3785)  
-[[KYUUBI #2609] Set Kyuubi server thrift client socket timeout to inf](https://github.com/apache/incubator-kyuubi/commit/c1df427f)  
-[[KYUUBI #2560] Upgrade kyuubi-hive-jdbc hive version to 3.1.3](https://github.com/apache/incubator-kyuubi/commit/a6e14ac3)  
-[[KYUUBI #2602] Bump testcontainers-scala 0.40.7](https://github.com/apache/incubator-kyuubi/commit/867e0beb)  
-[[KYUUBI #2565] Variable substitution should work in plan only mode](https://github.com/apache/incubator-kyuubi/commit/7f8369bf)  
-[[KYUUBI #2493][FOLLOWUP] Fix the exception that occurred when beeline rendered spark progress](https://github.com/apache/incubator-kyuubi/commit/0f0708b7)  
-[[KYUUBI #2378] Implement BatchesResource GET /batches/${batchId}/log](https://github.com/apache/incubator-kyuubi/commit/8c1fc100)  
-[[KYUUBI #2599] Bump scala-maven-plugin 4.6.1](https://github.com/apache/incubator-kyuubi/commit/ec561bf4)  
-[[KYUUBI #2493] Implement the progress of statement for spark sql engine](https://github.com/apache/incubator-kyuubi/commit/1cb4193d)  
-[[KYUUBI #2375][FOLLOWUP] Implement BatchesResource GET /batches](https://github.com/apache/incubator-kyuubi/commit/bcf30beb)  
-[[KYUUBI #2588] Reformat kyuubi-hive-sql-engine/pom.xml](https://github.com/apache/incubator-kyuubi/commit/5b788a13)  
-[[KYUUBI #2558] fix warn message](https://github.com/apache/incubator-kyuubi/commit/282f105a)  
-[[KYUUBI #2427][FOLLOWUP] Flaky test: deregister when meeting specified exception](https://github.com/apache/incubator-kyuubi/commit/80e068df)  
-[[KYUUBI #2582] Minimize Travis build and test](https://github.com/apache/incubator-kyuubi/commit/a7443285)  
-[[KYUUBI #2500][FOLLOWUP] Resolve flink conf at engine side](https://github.com/apache/incubator-kyuubi/commit/cad0bcd5)  
-[[KYUUBI #2571] Minimize YARN tests overhead](https://github.com/apache/incubator-kyuubi/commit/9d604955)  
-[[KYUUBI #2573] [KPIP-4][SUB-TASK] Add a seekable buffered reader for random access operation log](https://github.com/apache/incubator-kyuubi/commit/965bf218)  
-[[KYUUBI #2375][SUB-TASK][KPIP-4] Implement BatchesResource GET /batches](https://github.com/apache/incubator-kyuubi/commit/c967c74f)  
-[[KYUUBI #2571] Release connection to prevent the engine leak](https://github.com/apache/incubator-kyuubi/commit/270a5726)  
-[[KYUUBI #2522] Even if the process exit code is zero, also check the application state from resource manager](https://github.com/apache/incubator-kyuubi/commit/6e17e794)  
-[[KYUUBI #2569] Change the acquisition method of flinkHome to keep it consistent with other engines](https://github.com/apache/incubator-kyuubi/commit/9d5fba56)  
-[[KYUUBI #2550] Fix swagger does not show the request/response schema issue](https://github.com/apache/incubator-kyuubi/commit/90140cc3)  
-[[KYUUBI #2500] Command OptionParser for launching Flink Backend Engine](https://github.com/apache/incubator-kyuubi/commit/1932ad72)  
-[[KYUUBI #2379][SUB-TASK][KPIP-4] Implement BatchesResource DELETE /batches/${batchId}](https://github.com/apache/incubator-kyuubi/commit/5b3123e4)  
-[[KYUUBI #2513] Support NULL type in trino engine and add QueryTests](https://github.com/apache/incubator-kyuubi/commit/a58e1cf4)  
-[[KYUUBI #2403] [Improvement] move addTimeoutMonitor to AbstractOperation because it was used in multiple engines](https://github.com/apache/incubator-kyuubi/commit/9e263d79)  
-[[KYUUBI #2531] [Subtask] Kyuubi Spark TPC-DS Connector - Initial implementation](https://github.com/apache/incubator-kyuubi/commit/dcc3ccf3)  
-[[KYUUBI #2523] Flaky Test: KyuubiBatchYarnClusterSuite - open batch session](https://github.com/apache/incubator-kyuubi/commit/792c5422)  
-[[KYUUBI #2376][SUB-TASK][KPIP-4] Implement BatchesResource GET /batches/${batchId}](https://github.com/apache/incubator-kyuubi/commit/147c83bf)  
-[[KYUUBI #2547] Support jdbc url prefix jdbc:kyuubi://](https://github.com/apache/incubator-kyuubi/commit/5392591a)  
-[[KYUUBI #2549] Do not auth the request to load OpenApiConf](https://github.com/apache/incubator-kyuubi/commit/841a3635)  
-[[KYUUBI #2548] Prevent dead loop if the batch job submission process is not alive](https://github.com/apache/incubator-kyuubi/commit/672e8e95)  
-[[KYUUBI #2533] Make Utils.parseURL public to remove unnecessary reflection](https://github.com/apache/incubator-kyuubi/commit/3208410d)  
-[[KYUUBI #2524] [DOCS] Update metrics.md](https://github.com/apache/incubator-kyuubi/commit/7612f0af)  
-[[KYUUBI #2532] avoid NPE in KyuubiHiveDriver.acceptsURL](https://github.com/apache/incubator-kyuubi/commit/05161158)  
-[[KYUUBI #2478][FOLLOWUP] Invoke getOpts method instead of Reflection](https://github.com/apache/incubator-kyuubi/commit/cdfae8d8)  
-[[KYUUBI #2490][FOLLOWUP] Fix and move set command test case](https://github.com/apache/incubator-kyuubi/commit/973339db)  
-[[KYUUBI #2517] Rename ZorderSqlAstBuilder to KyuubiSparkSQLAstBuilder](https://github.com/apache/incubator-kyuubi/commit/04a91e10)  
-[[KYUUBI #2025][HIVE] Add a Hive on Yarn doc](https://github.com/apache/incubator-kyuubi/commit/02356a38)  
-[[KYUUBI #2032][Subtask] Hive Backend Engine - new APIs with hive-service-rpc 3.1.2 - SetClientInfo](https://github.com/apache/incubator-kyuubi/commit/3ab2c81d)  
-[[KYUUBI #2490] Fix NPE in getOperationStatus](https://github.com/apache/incubator-kyuubi/commit/96da2544)  
-[[KYUUBI #2516] [DOCS] Add Contributor over time in README.md](https://github.com/apache/incubator-kyuubi/commit/b739d39f)  
-[[KYUUBI #2346] [Improvement] Simplify FlinkProcessBuilder with java executable](https://github.com/apache/incubator-kyuubi/commit/3b04d994)  
-[[KYUUBI #2472] Support FilteredShowTablesCommand for AuthZ module](https://github.com/apache/incubator-kyuubi/commit/c969433f)  
-[[KYUUBI #2309][SUB-TASK][KPIP-4] Implement BatchesResource POST /batches](https://github.com/apache/incubator-kyuubi/commit/5a36db65)  
-[[KYUUBI #2028][FOLLOWUP] add engine stop event and fix the partition of initialized event](https://github.com/apache/incubator-kyuubi/commit/9ac5faaa)  
-[[KYUUBI #2512] Fix broken link of IntelliJ IDEA Setup Guide](https://github.com/apache/incubator-kyuubi/commit/03d4bbe9)  
-[[KYUUBI #2450][FOLLOWUP] Remove opHandle from opHandleSet when exception occurs](https://github.com/apache/incubator-kyuubi/commit/8e4a2954)  
-[[KYUUBI #2510] Fix NPE when invoking YarnApplicationOperation::getApplicationInfoByTag](https://github.com/apache/incubator-kyuubi/commit/c0963e1b)  
-[[KYUUBI #2496] Prevent empty auth user when anonymous is allowed](https://github.com/apache/incubator-kyuubi/commit/beb132f9)  
-[[KYUUBI #2498] Upgrade Delta version to 1.2.1](https://github.com/apache/incubator-kyuubi/commit/fa61da62)  
-[[KYUUBI #2419] Release engine during closing kyuubi server session if share level is connection](https://github.com/apache/incubator-kyuubi/commit/93f13ef6)  
-[[KYUUBI #2487] Fix test command to make it runnable](https://github.com/apache/incubator-kyuubi/commit/af162b1f)  
-[[KYUUBI #2457] Fix flaky test: engine log truncation](https://github.com/apache/incubator-kyuubi/commit/38cf4ccc)  
-[[KYUUBI #2478] Backport HIVE-19018 to Kyuubi Beeline](https://github.com/apache/incubator-kyuubi/commit/268db010)  
-[[KYUUBI #2020] [Subtask] Hive Backend Engine - new APIs with hive-service-rpc 3.1.2 - TGetQueryId](https://github.com/apache/incubator-kyuubi/commit/b41be9eb)  
-[[KYUUBI #2484] Add conf to SessionEvent and display it in EngineSessionPage](https://github.com/apache/incubator-kyuubi/commit/06da8cf8)  
-[[KYUUBI #2433] HiveSQLEngine load required jars from HIVE_HADOOP_CLASSPATH](https://github.com/apache/incubator-kyuubi/commit/679d23f0)  
-[[KYUUBI #2477] Change state early on stopping](https://github.com/apache/incubator-kyuubi/commit/7b70a6a0)  
-[[KYUUBI #2451] Support isWrapperFor and unwrap](https://github.com/apache/incubator-kyuubi/commit/61873214)  
-[[KYUUBI #2453] [Improvement] checkValue of TypedConfigBuilder shall also print the config name](https://github.com/apache/incubator-kyuubi/commit/68ac8a19)  
-[[KYUUBI #2427] Flaky test: deregister when meeting specified exception](https://github.com/apache/incubator-kyuubi/commit/40739a9f)  
-[[KYUUBI #2456] Supports managing engines of different share level in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/c5210547)  
-[[KYUUBI #1987] Support preserve user context in group/server share level](https://github.com/apache/incubator-kyuubi/commit/7cede6fd)  
-[[KYUUBI #2467] Remove close launchEngineOp](https://github.com/apache/incubator-kyuubi/commit/9fd62a21)  
-[[KYUUBI #2440] [Improvement] spark engine event add endTime when it is stopped](https://github.com/apache/incubator-kyuubi/commit/8a44b6bf)  
-[[KYUUBI #2461] Use the original host argument](https://github.com/apache/incubator-kyuubi/commit/dcea90b0)  
-[[KYUUBI #2463] Redact `kyuubi.ha.zookeeper.auth.digest` in Spark engine](https://github.com/apache/incubator-kyuubi/commit/f13856aa)  
-[[KYUUBI #2445] Implement ApplicationManager and Yarn/JPS-local Application Operation](https://github.com/apache/incubator-kyuubi/commit/5e6d645e)  
-[[KYUUBI #1936][FOLLOWUP] Stop updating credentials when credentials are expired](https://github.com/apache/incubator-kyuubi/commit/5ae7c9c0)  
-[[KYUUBI #2450] Update lastAccessTime in getStatus and add opHandle to opHandleSet before run](https://github.com/apache/incubator-kyuubi/commit/86f016d9)  
-[[KYUUBI #2439] Using Pure Java TPC-DS generator](https://github.com/apache/incubator-kyuubi/commit/3ecdd422)  
-[[KYUUBI #2448] Log the engine id when opening Kyuubi connection](https://github.com/apache/incubator-kyuubi/commit/eeb8a94f)  
-[[KYUUBI #2436] Add AlterTableRecoverPartitionsCommand for Spark Sql Authz PrivilegesBuilder](https://github.com/apache/incubator-kyuubi/commit/71bc0dc1)  
-[[KYUUBI #2432][DOCS] button "Download" is invalid](https://github.com/apache/incubator-kyuubi/commit/a3e8f7ac)  
-[[KYUUBI #2429][KYUUBI #2416] Increase Test Coverage For Privileges Builder](https://github.com/apache/incubator-kyuubi/commit/14f675d2)  
-[[KYUUBI #2344] [Improvement] Add Kyuubi Server on Kubernetes with Spark Cluster mode integration test](https://github.com/apache/incubator-kyuubi/commit/7fa04947)  
-[[KYUUBI #2201] Show ExecutionId when running status on query engine page](https://github.com/apache/incubator-kyuubi/commit/4fb93275)  
-[[KYUUBI #2424] [Improvement] add Flink compile version and Trino client compile version to KyuubiServer Log](https://github.com/apache/incubator-kyuubi/commit/a09ad0b6)  
-[[KYUUBI #2253] [Improvement] Trino Engine - Events support](https://github.com/apache/incubator-kyuubi/commit/4ec707b7)  
-[[KYUUBI #2426] Return complete error stack trace information](https://github.com/apache/incubator-kyuubi/commit/deb0e620)  
-[[KYUUBI #2410] [Improvement] Fix docker-image-tool.sh example version to 1.4.0](https://github.com/apache/incubator-kyuubi/commit/5f04aa67)  
-[[KYUUBI #2351] Fix Hive Engine terminating blocked by non-daemon threads](https://github.com/apache/incubator-kyuubi/commit/45e5eda0)  
-[[KYUUBI #2422] Wrap close session with try-finally](https://github.com/apache/incubator-kyuubi/commit/70a3005e)  
-[[KYUUBI #2420] Fix outdate .gitignore for dependency-reduced-pom.xml](https://github.com/apache/incubator-kyuubi/commit/60179171)  
-[[KYUUBI #2368] [Improvement] Command OptionParser for launching Trino Backend Engine](https://github.com/apache/incubator-kyuubi/commit/4f4960cf)  
-[[KYUUBI #2021][FOLLOWUP] Move derby workaround to test code](https://github.com/apache/incubator-kyuubi/commit/bafc2f84)  
-[[KYUUBI #2301] Limit the maximum number of concurrent connections per user and ipaddress](https://github.com/apache/incubator-kyuubi/commit/dba9e223)  
-[[KYUUBI #2323] Separate events to a submodule - kyuubi-event](https://github.com/apache/incubator-kyuubi/commit/d851b23a)  
-[[KYUUBI #2289] Use unique tag to kill applications](https://github.com/apache/incubator-kyuubi/commit/c9ea7fac)  
-[[KYUUBI #2021] Command OptionParser for launching Hive Backend Engine](https://github.com/apache/incubator-kyuubi/commit/20af38ee)  
-[[KYUUBI #2414][KYUUBI #2413] Fix InsertIntoHiveTableCommand case in PrivilegesBuilder#buildCommand()](https://github.com/apache/incubator-kyuubi/commit/b8877323)  
-[[KYUUBI #2406] Add Flink environments to template](https://github.com/apache/incubator-kyuubi/commit/cbde503f)  
-[[KYUUBI #2355] Bump Delta Lake 1.2.0](https://github.com/apache/incubator-kyuubi/commit/5bf4184c)  
-[[KYUUBI #2349][DOCS] Usage docs for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a59188ff)  
-[[KYUUBI #2381] [Test] Add Kyuubi on k8s With Spark on k8s client deploy-mode unit test](https://github.com/apache/incubator-kyuubi/commit/62db92f7)  
-[[KYUUBI #2390] RuleEliminateMarker stays in analyze phase for data masking](https://github.com/apache/incubator-kyuubi/commit/eb7ad512)  
-[[KYUUBI #2395] [DOC] Add Documentation for Spark AuthZ Extension](https://github.com/apache/incubator-kyuubi/commit/8f29b4fd)  
-[[KYUUBI #2397] Supports managing engines of different versions in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a7674d29)  
-[[KYUUBI #2402] [Improvement] addTimeoutMonitor for trino engine when it runs query async](https://github.com/apache/incubator-kyuubi/commit/9fb62772)  
-[[KYUUBI #2360] [Subtask] Configuring Hive engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/26d52faa)  
-[[KYUUBI #2399] Fix PrivilegesBuilder Build Wrong PrivilegeObjets When Query Without Project But With OrderBy/PartitionBy](https://github.com/apache/incubator-kyuubi/commit/91adc3fd)  
-[[KYUUBI #2308][SUB-TASK][KPIP-4] Batch job configuration ignore list and pre-defined configuration in server-side](https://github.com/apache/incubator-kyuubi/commit/4b42c735)  
-[[KYUUBI #2369] [Improvement] update developer.md to describe appending descriptions of new configurations to settings.md](https://github.com/apache/incubator-kyuubi/commit/1a58aaf7)  
-[[KYUUBI #2391] Fix privileges builder return wrong result when there is no project but has filter/join](https://github.com/apache/incubator-kyuubi/commit/88f168c6)  
-[[KYUUBI #2361] [Improvement] Configuring Trino Engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/1a68b866)  
-[[KYUUBI #2338] [DOCS] Upgrade sphinx dependencies for documentation build](https://github.com/apache/incubator-kyuubi/commit/7c6789a6)  
-[[KYUUBI #2385] Export JAVA_HOME. It seems TravisCI stopped doing it recently](https://github.com/apache/incubator-kyuubi/commit/b8d04af4)  
-[[KYUUBI #2353] [SUB-TASK][KPIP-4] Implement BatchJobSubmission operation and basic KyuubiBatchSessionImpl](https://github.com/apache/incubator-kyuubi/commit/b6d5c64c)  
-[[KYUUBI #2359] [Test] Build WithKyuubiServerOnKuberntes](https://github.com/apache/incubator-kyuubi/commit/65a272f6)  
-[[KYUUBI #2330] [Subtask] Hive Backend Engine - GetTypeInfo Operation](https://github.com/apache/incubator-kyuubi/commit/6d147894)  
-[[KYUUBI #2257] Replace conf vars with strings in HiveSQLEngine](https://github.com/apache/incubator-kyuubi/commit/86c6b1f8)  
-[[KYUUBI #2248][DOCS] Add a flink on yarn kerberos doc](https://github.com/apache/incubator-kyuubi/commit/d0a4697f)  
-[[KYUUBI #1451] Support Data Column Masking](https://github.com/apache/incubator-kyuubi/commit/fa95281b)  
-[[KYUUBI #2357] [Improvement] Add warn log and check in class of HiveProcessBuilder](https://github.com/apache/incubator-kyuubi/commit/d5acb78e)  
-[[KYUUBI #2328] Support getting mainResource in the module target directory of KYUUBI_HOME](https://github.com/apache/incubator-kyuubi/commit/8ef211a1)  
-[[KYUUBI #2354] Fix NPE in process builder log capture thread](https://github.com/apache/incubator-kyuubi/commit/d11866df)  
-[[KYUUBI #2337] [DOCS] Access Kyuubi with Kyuubi JDBC Driver](https://github.com/apache/incubator-kyuubi/commit/659d981c)  
-[[KYUUBI #2331] Add createSession method to further abstract openSession](https://github.com/apache/incubator-kyuubi/commit/8a525e52)  
-[[KYUUBI #2347] Output trino query id within query execute](https://github.com/apache/incubator-kyuubi/commit/55c4cae1)  
-[[KYUUBI #2345] [DOC] Hot Upgrade Kyuubi Server](https://github.com/apache/incubator-kyuubi/commit/12416a78)  
-[[KYUUBI #2336] Simplify TrinoProcessBuilder with java executable](https://github.com/apache/incubator-kyuubi/commit/9e486080)  
-[[KYUUBI #2343][KYUUBI #2297] HiveEngineEvent toString is not pretty](https://github.com/apache/incubator-kyuubi/commit/0888abd9)  
-[[KYUUBI #1989] Decouple curator from other modules](https://github.com/apache/incubator-kyuubi/commit/91010689)  
-[[KYUUBI #2324] [SUB-TASK][KPIP-4] Implement SparkBatchProcessBuilder to submit spark batch job](https://github.com/apache/incubator-kyuubi/commit/a63e811e)  
-[[KYUUBI #2329][KYUUBI #2214][FOLLOWUP] Cleanup kubernetes-deployment-it](https://github.com/apache/incubator-kyuubi/commit/ff19ae73)  
-[[KYUUBI #2216] [Test] [K8s] Add Spark Cluster mode on Kubernetes integration test](https://github.com/apache/incubator-kyuubi/commit/a4c521c6)  
-[[KYUUBI #2024][FOLLOWUP] Hive Backend Engine - ProcBuilder for HiveEngine](https://github.com/apache/incubator-kyuubi/commit/a56e4b4f)  
-[[KYUUBI #2281] The RenewDelegationToken method of TFrontendService should return SUCCESS_STATUS by default](https://github.com/apache/incubator-kyuubi/commit/04c536b9)  
-[[KYUUBI #2300] Add http UGIAssuming handler wrapper for kerberos enabled restful frontend service](https://github.com/apache/incubator-kyuubi/commit/9bd91054)  
-[[KYUUBI #2292][FOLLOWUP] Unify kyuubi server plugin location](https://github.com/apache/incubator-kyuubi/commit/93627611)  
-[[KYUUBI #2320] Make the CodeSource location correctly obtained on Windows](https://github.com/apache/incubator-kyuubi/commit/c29d2d85)  
-[[KYUUBI #2316] [DOCS] Fix typo in GitHub issue template](https://github.com/apache/incubator-kyuubi/commit/d3d73c6b)  
-[[KYUUBI #2317][BUILD] Bump hive-service-rpc 3.1.3 version](https://github.com/apache/incubator-kyuubi/commit/ff1bb555)  
-[[KYUUBI #2311] Refine github label](https://github.com/apache/incubator-kyuubi/commit/fbb2d974)  
-[[KYUUBI #2310] Make ranger extension work with mac m1](https://github.com/apache/incubator-kyuubi/commit/ed827708)  
-[[KYUUBI #2312] Spark data type TimestampNTZ supported version changes as 3.4.0](https://github.com/apache/incubator-kyuubi/commit/e1e0b358)  
-[[KYUUBI #2296] Fix operation log file handler leak](https://github.com/apache/incubator-kyuubi/commit/e5834ae7)  
-[[KYUUBI #2250] Support to limit the spark engine max running time](https://github.com/apache/incubator-kyuubi/commit/4bc14657)  
-[[KYUUBI #2299] Fix flaky test: support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/9c4af55f)  
-[[KYUUBI #2292] Unify kyuubi server plugin location](https://github.com/apache/incubator-kyuubi/commit/b41ec939)  
-[[KYUUBI #2292] Unify spark extension location](https://github.com/apache/incubator-kyuubi/commit/82a024a9)  
-[[KYUUBI #2084][FOLLOWUP] Support arbitrary parameters for KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/0ed865f7)  
-[[KYUUBI #2287] Revamp Flink IT by random port and merge tests](https://github.com/apache/incubator-kyuubi/commit/19f1d411)  
-[[KYUUBI #1451] Add Row-level filtering support](https://github.com/apache/incubator-kyuubi/commit/c4a608f9)  
-[[KYUUBI #2280] [INFRA] Replace BSD 3-clause with ASF License v2 for scala binaries](https://github.com/apache/incubator-kyuubi/commit/03b72268)  
-[[KYUUBI #2277] Inline kyuubi prefix in KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/6a231519)  
-[[KYUUBI #2266] The default value of frontend.connection.url.use.hostname should be set to true to be consistent with previous versions](https://github.com/apache/incubator-kyuubi/commit/c1a68a7c)  
-[[KYUUBI #2260] The running query will not update the duration of the page](https://github.com/apache/incubator-kyuubi/commit/a6ba6420)  
-[[KYUUBI #2272] Fix incorrect doc link](https://github.com/apache/incubator-kyuubi/commit/f0d6ca0f)  
-[[KYUUBI #2268] Flaky test: submit spark app timeout with last log output](https://github.com/apache/incubator-kyuubi/commit/2af5067a)  
-[[KYUUBI #2275][DOCS] Fix missing prefix in trino engine quick start](https://github.com/apache/incubator-kyuubi/commit/a18a3a95)  
-[[KYUUBI #2243][DOCS] Add quick start for trino engine](https://github.com/apache/incubator-kyuubi/commit/0ee53710)  
-[[KYUUBI #2255] The engine state of Spark's EngineEvent is hardcoded with 0](https://github.com/apache/incubator-kyuubi/commit/6bd9edf2)  
-[[KYUUBI #2263][KYUUBI #2262] Kyuubi Spark Nightly failed - select timestamp_ntz *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/89edeaee)  
-[[KYUUBI #2209] Add detail usage documents of Flink engine](https://github.com/apache/incubator-kyuubi/commit/5733452b)  
-[[KYUUBI #2246] [BUILD] Pick Commons-Logging dependence out of Hudi-Common](https://github.com/apache/incubator-kyuubi/commit/0fd47381)  
-[[KYUUBI #2028] Hive Backend Engine - Events support](https://github.com/apache/incubator-kyuubi/commit/9da22e66)  
-[[KYUUBI #2207] Fix DayTimeIntervalType/YearMonthIntervalType Column Size](https://github.com/apache/incubator-kyuubi/commit/d77882a8)  
-[[KYUUBI #1451] Introduce Kyuubi Spark AuthZ Module with column-level fine-grained authorization](https://github.com/apache/incubator-kyuubi/commit/513ea834)  
-[[KYUUBI #1798] Add EventBus module to unify the distribution and subscription of Kyuubi's events](https://github.com/apache/incubator-kyuubi/commit/6fe69753)  
-[[KYUUBI #2207] Support newly added spark data types: TimestampNTZType](https://github.com/apache/incubator-kyuubi/commit/dcc71b30)  
-[[KYUUBI #1021] Expire CredentialsRef in a proper time to reduce memor…](https://github.com/apache/incubator-kyuubi/commit/e16c728d)  
-[[KYUUBI #2241] Remove unused deployment documents of Spark](https://github.com/apache/incubator-kyuubi/commit/f80d1a8b)  
-[[KYUUBI #2244] load-kyuubi-env.sh should print SPARK_ENGINE_HOME for consistent](https://github.com/apache/incubator-kyuubi/commit/cf490a1b)  
-[[KYUUBI #2023] Hive Backend Engine - Shade HiveSQLEngine runtime](https://github.com/apache/incubator-kyuubi/commit/a7708076)  
-[[KYUUBI #1498] Support operation log for ExecuteScala](https://github.com/apache/incubator-kyuubi/commit/ea0121bc)  
-[[KYUUBI #2087] Add issue template for documentation improvement](https://github.com/apache/incubator-kyuubi/commit/fe3ece1d)  
-[[KYUUBI #1962] Add timeout check for createSpark](https://github.com/apache/incubator-kyuubi/commit/8b06f135)  
-[[KYUUBI #2218] Fix maven options about hive-provided](https://github.com/apache/incubator-kyuubi/commit/98887e75)  
-[[KYUUBI #2225] Support to set result max rows for spark engine](https://github.com/apache/incubator-kyuubi/commit/9797ff0d)  
-[[KYUUBI #2008][FOLLOWUP] Support engine type and subdomain in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/015cbe5d)  
-[[KYUUBI #2231] Close action and default sparksession before `createSpark`.](https://github.com/apache/incubator-kyuubi/commit/b18be382)  
-[[KYUUBI #2222] Refactor the log when failing to get hadoop fs delegation token](https://github.com/apache/incubator-kyuubi/commit/a5b4c1b9)  
-[[KYUUBI #2227] Fix operation log dir not deleted issue](https://github.com/apache/incubator-kyuubi/commit/55052f96)  
-[[KYUUBI #2223] Return the last rows of log for prompts even exception detected](https://github.com/apache/incubator-kyuubi/commit/c1ffc3ca)  
-[[KYUUBI #2221] Shade hive-service-rpc and thrift in Spark engine](https://github.com/apache/incubator-kyuubi/commit/86cd685d)  
-[[KYUUBI #2102] Support to retry the internal thrift request call and add engine liveness probe to enable fast fail before retry](https://github.com/apache/incubator-kyuubi/commit/e8445b7f)  
-[[KYUUBI #2207] Support newly added spark data types: DayTimeIntervalType/YearMonthIntervalType](https://github.com/apache/incubator-kyuubi/commit/d0c92caa)  
-[[KYUUBI #2085][FOLLOWUP] Fix kyuubi-common wrong label](https://github.com/apache/incubator-kyuubi/commit/fb3e1235)  
-[[KYUUBI #2208] Fixed session close operator log session dir not deleted](https://github.com/apache/incubator-kyuubi/commit/a13a89e8)  
-[[KYUUBI #2214] Add Spark Engine on Kubernetes integration test](https://github.com/apache/incubator-kyuubi/commit/4f0323d5)  
-[[KYUUBI #2203][FLINK] Support flink conf set by kyuubi conf file](https://github.com/apache/incubator-kyuubi/commit/c09cd654)  
-[[KYUUBI #2197][BUILD] Bump jersey 2.35 version](https://github.com/apache/incubator-kyuubi/commit/88e0bcd2)  
-[[KYUUBI #2204] Make comments consistent with code in EngineRef](https://github.com/apache/incubator-kyuubi/commit/89497c41)  
-[[KYUUBI #2186] Manage test failures with kyuubi spark nightly build - execute statement - select interval](https://github.com/apache/incubator-kyuubi/commit/8f2b7358)  
-[[KYUUBI #2195] Using while-loop or for-loop instead of map/range to improve performance in RowSet](https://github.com/apache/incubator-kyuubi/commit/5ed8f148)  
-[[KYUUBI #2033] Hive Backend Engine - GetCrossReference](https://github.com/apache/incubator-kyuubi/commit/4e01f9b9)  
-[[KYUUBI #2135] Build and test all modules on Linux ARM64](https://github.com/apache/incubator-kyuubi/commit/e390f34c)  
-[[KYUUBI #2035] Hive Backend Engine - `build/dist` support](https://github.com/apache/incubator-kyuubi/commit/04896233)  
-[[KYUUBI #2189] Manage test failures with kyuubi spark nightly build - deregister when meeting specified exception *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/0025505a)  
-[[KYUUBI #2119][FOLLOWUP] Support output progress bar in Spark engine](https://github.com/apache/incubator-kyuubi/commit/e7c42012)  
-[[KYUUBI #2187] Manage test failures with kyuubi spark nightly build - execute simple scala code *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/b02fb584)  
-[[KYUUBI #2184] Manage test failures with kyuubi spark nightly build - operation listener *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/55b14056)  
-[[KYUUBI #2034] Hive Backend Engine - GetPrimaryKeys](https://github.com/apache/incubator-kyuubi/commit/f333e6ba)  
-[[KYUUBI #2086] Add issue template for subtasks - remove duplicate labels](https://github.com/apache/incubator-kyuubi/commit/ea6e02b5)  
-[[KYUUBI #2086] Add issue template for subtasks](https://github.com/apache/incubator-kyuubi/commit/0b721f4f)  
-[[KYUUBI #2163][K8s] copy beeline-jars into docker image](https://github.com/apache/incubator-kyuubi/commit/f29adb2f)  
-[[KYUUBI #2175] Improve CI with cancel & concurrency & paths filter](https://github.com/apache/incubator-kyuubi/commit/3eb1fa9e)  
-[[KYUUBI #2172][BUILD] Bump Flink 1.14.4 version](https://github.com/apache/incubator-kyuubi/commit/3c2463b1)  
-[[KYUUBI #2159][FLINK] Prevent loss of exception message line separator](https://github.com/apache/incubator-kyuubi/commit/4eb5df34)  
-[[KYUUBI #2024] Hive Backend Engine - ProcBuilder for HiveEngine](https://github.com/apache/incubator-kyuubi/commit/2911ac26)  
-[[KYUUBI #2017] Hive Backend Engine - GetColumns Operation](https://github.com/apache/incubator-kyuubi/commit/f22a14f1)  
-[[KYUUBI #2019] Hive Backend Engine - GetTableTypes Operation](https://github.com/apache/incubator-kyuubi/commit/db987998)  
-[[KYUUBI #2018] Hive Backend Engine - GetFunctions Operation](https://github.com/apache/incubator-kyuubi/commit/57edfcd0)  
-[[KYUUBI #2156][FOLLOWUP] Fix configuration format in document](https://github.com/apache/incubator-kyuubi/commit/62f685ff)  
-[[KYUUBI #2119] Support output progress bar in Spark engine](https://github.com/apache/incubator-kyuubi/commit/f65db034)  
-[[KYUUBI #2016] Hive Backend Engine - GetTables Operation](https://github.com/apache/incubator-kyuubi/commit/449c4260)  
-[[KYUUBI #2156] Change log to reflect exactly why getting token failed](https://github.com/apache/incubator-kyuubi/commit/31be7a30)  
-[[KYUUBI #2148][DOCS] Add dev/reformat usage](https://github.com/apache/incubator-kyuubi/commit/36507f8e)  
-[[KYUUBI #2150] [DOCS] Fix Getting Started With Kyuubi on Kubernetes](https://github.com/apache/incubator-kyuubi/commit/95fc6da9)  
-[[KYUUBI #2143][KYUUBI #2142][DOCS] Add IDEA setup guide](https://github.com/apache/incubator-kyuubi/commit/2225b9a1)  
-[[KYUUBI #2015] Hive Backend Engine - GetSchemas Operation](https://github.com/apache/incubator-kyuubi/commit/07d36320)  
-[[KYUUBI #1936][FOLLOWUP] Send credentials when opening session and wait for completion](https://github.com/apache/incubator-kyuubi/commit/eb4d2890)  
-[[KYUUBI #2085][FOLLOWUP] Fix the wrong path of `module:hive`](https://github.com/apache/incubator-kyuubi/commit/ffb7a6f4)  
-[[KYUUBI #2134] Respect Spark bundled log4j in extension modules](https://github.com/apache/incubator-kyuubi/commit/d7d8b05d)  
-[[KYUUBI #2014] Hive Backend Engine - GetCatalog Operation](https://github.com/apache/incubator-kyuubi/commit/54c0035b)  
-[[KYUUBI #2129] FlinkEngine throws UnsupportedOperationException in GetColumns](https://github.com/apache/incubator-kyuubi/commit/4e2fdd52)  
-[[KYUUBI #2022] Hive Backend Engine - maven-google-downloader plugin support for hive distribution](https://github.com/apache/incubator-kyuubi/commit/0cfbcd2e)  
-[[KYUUBI #2084] Support arbitrary parameters for KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/8f15622d)  
-[[KYUUBI #2112] Improve the compatibility of queryTimeout in more version clients](https://github.com/apache/incubator-kyuubi/commit/55b422bf)  
-[[KYUUBI #2115] Update license and enhance collect_licenses script](https://github.com/apache/incubator-kyuubi/commit/33424a1b)  
-[[KYUUBI #2085] Add a labeler github action to triage PRs](https://github.com/apache/incubator-kyuubi/commit/da22498a)  
-[[KYUUBI #1866][FOLLOWUP] Add Deploy Kyuubi Flink engine on Yarn](https://github.com/apache/incubator-kyuubi/commit/a83cd49e)  
-[[KYUUBI #2120] Optimize RenewDelegationToken logs in Spark engine](https://github.com/apache/incubator-kyuubi/commit/450cf691)  
-[[KYUUBI #2104] Kill yarn job using yarn client API when kyuubi engine …](https://github.com/apache/incubator-kyuubi/commit/ffdd665f)  
-[[KYUUBI #2118] [SUB-TASK][KPIP-2] Support session jars management](https://github.com/apache/incubator-kyuubi/commit/537a0f82)  
-[[KYUUBI #2127] avoid to set HA_ZK_NAMESPACE and HA_ZK_ENGINE_REF_ID repetitively when create flink sql engine](https://github.com/apache/incubator-kyuubi/commit/48c0059d)  
-[[KYUUBI #2123] Output engine information after openEngineSession call fails](https://github.com/apache/incubator-kyuubi/commit/f97350de)  
-[[KYUUBI #2125] closeSession should avoid sending RPC after openSession fails](https://github.com/apache/incubator-kyuubi/commit/8c85480b)  
-[[KYUUBI #2116] move toString() to ProcBuilder trait from its implements](https://github.com/apache/incubator-kyuubi/commit/e4af5513)  
-[[KYUUBI #2103] Revert "[KYUUBI #1948] Upgrade thrift version to 0.16.0"](https://github.com/apache/incubator-kyuubi/commit/f8efcb71)  
-[[KYUUBI #1866][FOLLOWUP] Add logging of Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/b8389dae)  
-[[KYUUBI #1866][DOCS] Add flink sql engine quick start](https://github.com/apache/incubator-kyuubi/commit/8f7b2c66)  
-[[KYUUBI #2108] Add description about trino in the config of engine.type](https://github.com/apache/incubator-kyuubi/commit/b7a5cfcf)  
-[[KYUUBI #2071] Using while-loop instead of map/range to improve performance in RowSet](https://github.com/apache/incubator-kyuubi/commit/e942df40)  
-[[KYUUBI #2070][FLINK] Support Flink job submission on yarn-session mode](https://github.com/apache/incubator-kyuubi/commit/7d66e9aa)  
-[[KYUUBI #2089] Add debugging instructions for Flink engine](https://github.com/apache/incubator-kyuubi/commit/2486c5df)  
-[[KYUUBI #2097] [CI] Upload Test Log for CI failure shall contain trino engine log #2094](https://github.com/apache/incubator-kyuubi/commit/c5a7c669)  
-[[KYUUBI #2095] Remove useless logic about add conf when create a new engine](https://github.com/apache/incubator-kyuubi/commit/46234594)  
-[[KYUUBI #1948][FOLLOWUP] Relocate fb303 classes](https://github.com/apache/incubator-kyuubi/commit/12d56422)  
-[[KYUUBI #1978] Support NEGOTIATE/BASIC authorization for restful frontend service](https://github.com/apache/incubator-kyuubi/commit/1e23e7a9)  
-[[KYUUBI #2081] YARN_CONF_DIR shall be added to kyuubi server classpath as HADOOP_CONF_DIR](https://github.com/apache/incubator-kyuubi/commit/a71c511b)  
-[[KYUUBI #2078] `logCaptureThread` does not catch sparksubmit exception](https://github.com/apache/incubator-kyuubi/commit/cf014ee2)  
-[[KYUUBI #1936] Send credentials when opening session and wait for completion](https://github.com/apache/incubator-kyuubi/commit/8e983a19)  
-[[KYUUBI #2079] Update kyuubi layer source file to add flink and trino…](https://github.com/apache/incubator-kyuubi/commit/a882c4bf)  
-[[KYUUBI #1563] Fix broken link and add new link in `CONTRIBUTION.md`](https://github.com/apache/incubator-kyuubi/commit/95a4974e)  
-[[KYUUBI #2072] Improve rest server behavior](https://github.com/apache/incubator-kyuubi/commit/746a94a7)  
-[[KYUUBI #2075] Using thread-safe FastDateFormat instead of SimpleDateFormat](https://github.com/apache/incubator-kyuubi/commit/5a64e124)  
-[[KYUUBI #2066] fix spelling mistake and appropriate naming](https://github.com/apache/incubator-kyuubi/commit/caeb6a43)  
-[[KYUUBI #2063] Fix engine idle timeout lose efficacy for Flink Engine](https://github.com/apache/incubator-kyuubi/commit/dde83819)  
-[[KYUUBI #2061] Implementation of the very basic UI on current Jetty server](https://github.com/apache/incubator-kyuubi/commit/9bf8ff83)  
-[[KYUUBI #2011] Introduce to very basic hive engine](https://github.com/apache/incubator-kyuubi/commit/2b50aaa4)  
-[[KYUUBI #1215][DOC] Document incremental collection](https://github.com/apache/incubator-kyuubi/commit/54dfb4bb)  
-[[KYUUBI #2043] Upgrade log4j/2.x/ to 2.17.2](https://github.com/apache/incubator-kyuubi/commit/dc6085e0)  
-[[KYUUBI #2060] Clear job group for init SQL](https://github.com/apache/incubator-kyuubi/commit/f8d9010b)  
-[[KYUUBI #2044] Remove authentication thread local objects to prevent memory leak](https://github.com/apache/incubator-kyuubi/commit/35a6b9b3)  
-[[KYUUBI #1955] Add CI for branch-1.5 & 1.4 SNAPSHOTS](https://github.com/apache/incubator-kyuubi/commit/df2ff6e8)  
-[[KYUUBI #2054] [KYUUBI-1819] Support closing Flink SQL engine process](https://github.com/apache/incubator-kyuubi/commit/9518724c)  
-[[KYUUBI #2055] correct the log service name](https://github.com/apache/incubator-kyuubi/commit/b1e949d4)  
-[[KYUUBI #2047] Support more MySQL JDBC driver versions](https://github.com/apache/incubator-kyuubi/commit/109569bc)   
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/configurations.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/configurations.rst.txt
deleted file mode 100644
index fd9f20a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/configurations.rst.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Client Configuration Guide
-==========================
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_pool.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_pool.rst.txt
deleted file mode 100644
index ed97f3c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_pool.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Enabling Kyuubi Engine Pool
-===========================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_resouces.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_resouces.rst.txt
deleted file mode 100644
index 538c9ec..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_resouces.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Configuring Resources for Kyuubi Engines
-========================================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_share_level.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_share_level.rst.txt
deleted file mode 100644
index 9dd484b..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_share_level.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Sharing and Isolation for Kyuubi Engines
-========================================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_ttl.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_ttl.rst.txt
deleted file mode 100644
index 0ba7751..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_ttl.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Setting Time to Live for Kyuubi Engines
-=======================================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_type.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_type.rst.txt
deleted file mode 100644
index 1687678..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/engine_type.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Using Different Kyuubi Engines
-==============================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/index.rst.txt
deleted file mode 100644
index 0491af2..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/index.rst.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Advanced Features
-=================
-
-.. toctree::
-    :maxdepth: 2
-
-    engine_type
-    engine_share_level
-    engine_ttl
-    engine_pool
-    engine_resources
-    scala
-    plan_only
-
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/plan_only.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/plan_only.rst.txt
deleted file mode 100644
index 2f2c7f1..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/plan_only.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Plan Only Execution Mode
-========================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/scala.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/features/scala.rst.txt
deleted file mode 100644
index 64877c7..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/features/scala.rst.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Running Scala Snippets
-======================
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/index.rst.txt
deleted file mode 100644
index 1ff4e59..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/index.rst.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Client Commons
-==============
-
-.. toctree::
-    :maxdepth: 2
-
-    configurations
-    logging
-    kerberized_kyuubi
-    features/index
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/kerberos.md.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/kerberos.md.txt
deleted file mode 100644
index f6d71e8..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/kerberos.md.txt
+++ /dev/null
@@ -1,224 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Configure Kerberos for clients to Access Kerberized Kyuubi
-
-## Instructions
-When Kyuubi is secured by Kerberos, the authentication procedure becomes a little complicated.
-
-![](../../imgs/kyuubi_kerberos_authentication.png)
-
-The graph above shows a simplified kerberos authentication procedure:
-1. The Kerberos client sends the user principal and secret key to the KDC. The secret key can be a password or a keytab file.  
-2. The KDC returns a `ticket-granting ticket` (TGT).
-3. The Kerberos client stores the TGT in a ticket cache.
-4. The JDBC client, such as beeline or a BI tool, reads the TGT from the ticket cache.
-5. The JDBC client sends the TGT and the server principal to the KDC.
-6. The KDC returns a `client-to-server ticket`.
-7. The JDBC client sends the `client-to-server ticket` to the Kyuubi server to prove its identity.
-
-In the rest of this page, we describe the steps needed to complete this authentication.
-
-## Install Kerberos Client
-Usually, the Kerberos client is installed by default. You can verify the installation with the `klist` tool.
-
-Linux command and output:
-```bash
-$ klist -V
-Kerberos 5 version 1.15.1
-```
-
-MacOS command and output:
-```bash
-$ klist --version
-klist (Heimdal 1.5.1apple1)
-Copyright 1995-2011 Kungliga Tekniska Högskolan
-Send bug-reports to heimdal-bugs@h5l.org
-```
-
-Windows command and output:
-```cmd
-> klist -V
-Kerberos for Windows
-```
-
-If the client is not installed, you should install it first for your OS platform.  
-We recommend installing the MIT Kerberos distribution, as all commands in this guide are based on it.  
-
-## Configure Kerberos Client
-The Kerberos client needs a configuration file that controls how the Kerberos ticket cache is created.
-The configuration file's default location on each OS is:
-
-OS | Path
----| ---
-Linux | /etc/krb5.conf
-MacOS | /etc/krb5.conf
-Windows | %ProgramData%\MIT\Kerberos5\krb5.ini
-
-You can use the `KRB5_CONFIG` environment variable to override the default location.
-
-The configuration file should point to the same KDC that the Kyuubi server uses.
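-
-For reference only, a minimal `krb5.conf` might look like the sketch below; the realm name and KDC host are illustrative placeholders and must be replaced with the ones your Kyuubi deployment actually uses:
-
-```ini
-[libdefaults]
-  default_realm = KYUUBI.APACHE.ORG
-
-[realms]
-  KYUUBI.APACHE.ORG = {
-    # illustrative KDC host; point it at your real KDC
-    kdc = kdc.kyuubi.apache.org
-    admin_server = kdc.kyuubi.apache.org
-  }
-```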
-
-## Get Kerberos TGT
-Execute the `kinit` command to get a TGT from the KDC.
-
-Suppose the user principal is `kyuubi_user@KYUUBI.APACHE.ORG` and the keytab file is `kyuubi_user.keytab`, 
-then the command would be:
-
-```
-$ kinit -kt kyuubi_user.keytab kyuubi_user@KYUUBI.APACHE.ORG
-
-(Command is identical on different OS platform)
-```
-
-You may also execute the `kinit` command with a principal and password to get a TGT:
-
-```
-$ kinit kyuubi_user@KYUUBI.APACHE.ORG
-Password for kyuubi_user@KYUUBI.APACHE.ORG: password 
-
-(Command is identical on different OS platform)
-```
-
-If the command executes successfully, the TGT will be stored in the ticket cache.  
-Use the `klist` command to print the TGT info in the ticket cache:
-
-```
-$ klist
-
-Ticket cache: FILE:/tmp/krb5cc_1000
-Default principal: kyuubi_user@KYUUBI.APACHE.ORG
-
-Valid starting       Expires              Service principal
-2021-12-13T18:44:58  2021-12-14T04:44:58  krbtgt/KYUUBI.APACHE.ORG@KYUUBI.APACHE.ORG
-    renew until 2021-12-14T18:44:57
-    
-(Command is identical on different OS platform. Ticket cache location may be different.)
-```
-
-The ticket cache may have a different storage type on each OS platform. 
-
-For example,
-
-OS | Default Ticket Cache Type and Location
----| ---
-Linux | FILE:/tmp/krb5cc_%{uid}
-MacOS | KCM:%{uid}:%{gid}
-Windows | API:krb5cc
-
-You can find your ticket cache type and location in the `Ticket cache` part of `klist` output.
-
-**Note**:  
-- Ensure your ticket cache type is `FILE`, as the JVM can only read a ticket cache stored as a file.
-- Do not store the TGT in the default ticket cache if you run Kyuubi and execute `kinit` on the same 
-host with the same OS user. The default ticket cache is already used by the Kyuubi server.
-
-If the default ticket cache is not a file, or it is already used by the Kyuubi server, you 
-should store the ticket cache in another file location.  
-This can be done by specifying a file location with the `-c` argument of the `kinit` command.
-
-For example,
-```
-$ kinit -c /tmp/krb5cc_beeline -kt kyuubi_user.keytab kyuubi_user@KYUUBI.APACHE.ORG
-
-(Command is identical on different OS platform)
-```
-
-To check the ticket cache, specify the file location with the `-c` argument of the `klist` command.
-
-For example,
-```
-$ klist -c /tmp/krb5cc_beeline
-
-(Command is identical on different OS platform)
-```
-
-## Add Kerberos Client Configuration File to JVM Search Path
-The JVM that the JDBC client runs on also needs to read the Kerberos client configuration file.
-However, the JVM uses different default locations than the Kerberos client, and does not honor the `KRB5_CONFIG`
-environment variable.
-
-OS | JVM Search Paths
----| ---
-Linux | System scope: `/etc/krb5.conf`
-MacOS | User scope: `$HOME/Library/Preferences/edu.mit.Kerberos`<br/>System scope: `/etc/krb5.conf`
-Windows | User scope: `%USERPROFILE%\krb5.ini`<br/>System scope: `%windir%\krb5.ini`
-
-You can use the JVM system property `java.security.krb5.conf` to override the default location.
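-
-For example, assuming beeline is launched through the Hadoop/Hive wrapper scripts (which pass `HADOOP_CLIENT_OPTS` to the client JVM), a sketch of pointing the JVM at a custom configuration file could be (the file path is just a placeholder):
-
-```bash
-# Pass the Kerberos client configuration location to the client JVM (path is illustrative)
-export HADOOP_CLIENT_OPTS="-Djava.security.krb5.conf=/etc/krb5-client.conf ${HADOOP_CLIENT_OPTS}"
-```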
-
-## Add Kerberos Ticket Cache to JVM Search Path
-JVM determines the ticket cache location in the following order:
-1. Path specified by `KRB5CCNAME` environment variable. Path must start with `FILE:`.
-2. `/tmp/krb5cc_%{uid}` on Unix-like OS, e.g. Linux, MacOS
-3. `${user.home}/krb5cc_${user.name}` if `${user.name}` is not null
-4. `${user.home}/krb5cc` if `${user.name}` is null
-
-**Note**:  
-- `${user.home}` and `${user.name}` are JVM system properties.
-- `${user.home}` should be replaced with `${user.dir}` if `${user.home}` is null.
- 
-Ensure your ticket cache is stored as a file and put it in one of the above locations. 
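-
-For example, to make the JVM pick up the ticket cache created earlier with `kinit -c`, you can export `KRB5CCNAME` before launching the JDBC client (keep the `FILE:` prefix):
-
-```bash
-# Point the JVM at the file-based ticket cache created by `kinit -c /tmp/krb5cc_beeline ...`
-export KRB5CCNAME=FILE:/tmp/krb5cc_beeline
-```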
-
-## Ensure core-site.xml Exists in Classpath
-As with Hadoop clients, `hadoop.security.authentication` should be set to `KERBEROS` in `core-site.xml` 
-so that the Hive JDBC driver uses Kerberos authentication. `core-site.xml` should be placed on the beeline 
-classpath or the BI tool's classpath.
-
-### Beeline
-Here are the usual locations where `core-site.xml` should exist for different beeline distributions:
-
-Client | Location | Note
---- | --- | ---
-Hive beeline | `$HADOOP_HOME/etc/hadoop` | Hive resolves `$HADOOP_HOME` and uses the `$HADOOP_HOME/bin/hadoop` command to launch beeline. `$HADOOP_HOME/etc/hadoop` is in the `hadoop` command's classpath.
-Spark beeline | `$HADOOP_CONF_DIR` | In `$SPARK_HOME/conf/spark-env.sh`, `$HADOOP_CONF_DIR` is often set to the directory containing the hadoop client configuration files.
-Kyuubi beeline | `$HADOOP_CONF_DIR` | In `$KYUUBI_HOME/conf/kyuubi-env.sh`, `$HADOOP_CONF_DIR` is often set to the directory containing the hadoop client configuration files.
-
-If `core-site.xml` is not found in the above locations, create one with the following content:
-```xml
-<configuration>
-  <property>
-    <name>hadoop.security.authentication</name>
-    <value>kerberos</value>
-  </property>
-</configuration>
-```
-
-### BI Tools
-For BI tools, the way to add `core-site.xml` varies.  
-Take DBeaver as an example: we can add files to DBeaver's classpath through its `Global libraries` preference.  
-As `Global libraries` only accepts jar files, you should package `core-site.xml` into a jar file.
-
-```bash
-$ jar -c -f core-site.jar core-site.xml
-
-(Command is identical on different OS platform)
-```
-
-## Connect with JDBC URL
-The last step is to connect to Kyuubi with the right JDBC URL.  
-The JDBC URL should be in the format:
-
-```
-jdbc:hive2://<kyuubi_server_address>:<kyuubi_server_port>/<db>;principal=<kyuubi_server_principal>
-```
-
-**Note**:  
-- `kyuubi_server_principal` is the value of `kyuubi.kinit.principal` set in `kyuubi-defaults.conf`.
-- As a command line argument, the JDBC URL should be quoted to avoid being split into two commands by the ";", as in the example below.
-- For DBeaver, `<db>;principal=<kyuubi_server_principal>` should be set as the `Database/Schema` argument.
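-
-For example, a quoted beeline invocation might look like the sketch below; the hostname, port, and server principal are placeholders to be replaced with your own values:
-
-```bash
-# Hostname, port and principal below are illustrative placeholders
-beeline -u "jdbc:hive2://kyuubi.example.com:10009/default;principal=kyuubi/kyuubi.example.com@KYUUBI.APACHE.ORG"
-```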
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/advanced/logging.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/advanced/logging.rst.txt
deleted file mode 100644
index ab0013a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/advanced/logging.rst.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Logging
-=======
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/datagrip.md.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/datagrip.md.txt
deleted file mode 100644
index 517061d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/datagrip.md.txt
+++ /dev/null
@@ -1,57 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# DataGrip
-## What is DataGrip
-[DataGrip](https://www.jetbrains.com/datagrip/) is a multi-engine database environment released by JetBrains, supporting MySQL and PostgreSQL, Microsoft SQL Server and Oracle, Sybase, DB2, SQLite, HyperSQL, Apache Derby, and H2.
-
-## Preparation
-### Get DataGrip And Install
-Please go to [Download DataGrip](https://www.jetbrains.com/datagrip/download) to get and install an appropriate version for yourself.
-### Get Kyuubi Started
-[Get kyuubi server started](../../quick_start.md) before you try DataGrip with kyuubi.
-
-For debugging purposes, you can use `tail -f` or `tailf` to track the server log.
-## Configurations
-### Start DataGrip
-After you install DataGrip, just launch it.
-### Select Database
-Essentially, this step chooses the JDBC driver type to be used later. We can choose Apache Hive to set up a driver for Kyuubi.
-
-![select database](../../imgs/datagrip/select_database.png)
-### Datasource Driver
-You should first download the missing driver files. Just click the link below, and DataGrip will download and install them.
-
-![datasource and driver](../../imgs/datagrip/datasource_and_driver.png)
-### Generic JDBC Connection Settings
-After installing the drivers, configure the host and port, which you can find in the Kyuubi server log. By default, use `localhost` and `10009`.
-
-Of course, you can fill in other configs as needed.
-
-After the generic configs are set, you can use `Test Connection` to verify the connection.
-
-![configuration](../../imgs/datagrip/configuration.png)
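-
-Under the hood, these settings correspond to a Hive-style JDBC URL; assuming the defaults above, it would look roughly like this:
-
-```
-jdbc:hive2://localhost:10009
-```
-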
-## Interacting With Kyuubi Server
-Now, you can interact with Kyuubi server.
-
-The left side of the screenshot shows the tables, and the right side shows the console.
-
-You can interact through the visual interface or code.
-
-![workspace](../../imgs/datagrip/workspace.png)
-## The End
-There are many other amazing features in both Kyuubi and DataGrip and here is just the tip of the iceberg. The rest is for you to discover.
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/dbeaver.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/dbeaver.rst.txt
deleted file mode 100644
index 0064985..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/dbeaver.rst.txt
+++ /dev/null
@@ -1,125 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-DBeaver
-=======
-
-What is DBeaver
----------------
-
-.. image:: https://raw.githubusercontent.com/wiki/dbeaver/dbeaver/images/dbeaver-icon-64x64.png
-
-`DBeaver`_ is a free multi-platform database tool for developers, database administrators, analysts, and all people who need to work with databases.
-It supports all popular databases as well as the Kyuubi JDBC driver.
-
-.. seealso:: `DBeaver Wiki`_
-
-Installation
-------------
-Please go to the `Download DBeaver`_ page to get and install an appropriate release version.
-
-.. versionadded:: 22.1.0(dbeaver)
-   DBeaver has officially supported the Apache Kyuubi JDBC driver since 06 Jun 2022, via `PR 16567 <https://github.com/dbeaver/dbeaver/issues/16567>`_.
-
-Using DBeaver with Kyuubi
--------------------------
-If you have successfully installed DBeaver, just launch it.
-
-New Connection
-**************
-
-First, we need to create a database connection against a live Kyuubi server.
-The Kyuubi JDBC driver is available since DBeaver 22.1.0, as shown in the following figure.
-
-.. image:: ../../imgs/dbeaver/new_database_connection.png
-
-.. note::
-   We can also choose Apache Hive or Apache Spark to set up a driver for Kyuubi, because they are compatible with the same client.
-
-Configure Connection
-********************
-
-Second, we configure the JDBC connection settings, which form the underlying Kyuubi JDBC connection URL string.
-
-Basic Connection Settings
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The basic connection settings contain the minimal set of items you need to talk to the Kyuubi server:
-
-- Host - hostname or IP address that the Kyuubi server is bound to, default: `localhost`.
-- Port - port that the Kyuubi server listens on, default: `10009`.
-- Database/Schema - database or schema to use, default: `default`.
-- Authentication - identity information, such as user/password, based on the server authentication mechanism.
-
-Session Configurations
-^^^^^^^^^^^^^^^^^^^^^^
-
-The session configuration list is an optional part of Kyuubi JDBC URLs, and is very helpful for overriding some Kyuubi server configurations at session scope.
-DBeaver's setup page does not contain a dedicated text box for this.
-However, we can append the semicolon-separated configuration pairs to the Database/Schema field, led by a number sign (#).
-Though a bit unusual, it works.
-
-.. image:: ../../imgs/dbeaver/configure_database_connection.png
-
-As shown in the picture above, the engine uses 2 gigabytes of memory for the driver process of the Kyuubi engine and will be terminated after being idle for 30 seconds.
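-
-A textual sketch of such a Database/Schema field value is shown below; the configuration keys are illustrative, so check the Kyuubi configuration reference for the exact names supported by your server version:
-
-.. code-block:: text
-
-   default;#spark.driver.memory=2g;kyuubi.session.engine.idle.timeout=PT30S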
-
-Connecting in HA mode
-^^^^^^^^^^^^^^^^^^^^^
-
-Kyuubi supports HA by service discovery over an Apache ZooKeeper cluster.
-
-.. image:: ../../imgs/dbeaver/configure_database_connection_ha.png
-
-As shown in the picture above, the Host and Port fields can be used to concatenate the comma-separated ZooKeeper peers,
-while `serviceDiscoveryMode` and `zooKeeperNamespace` are appended to the Database/Schema field.
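-
-An illustrative sketch of those fields could look like the following; the hostnames and the namespace are placeholders, and the namespace must match what is configured on the Kyuubi server side:
-
-.. code-block:: text
-
-   Host:            zk1.example.com:2181,zk2.example.com:2181,zk3.example.com
-   Port:            2181
-   Database/Schema: default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi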
-
-Test Connection
-***************
-
-It is not required, but recommended, to click `Test Connection` to verify that the connection is set up correctly.
-If something goes wrong on the client side or the server side, we can debug it up front with the error message.
-
-SQL Operations
-**************
-
-Now, we can use the SQL editor to write queries that interact with the Kyuubi server through the connection.
-
-.. code-block:: sql
-
-   DESC NAMESPACE DEFAULT;
-
-.. code-block:: sql
-
-   CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET;
-   INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao');
-
-.. code-block:: sql
-
-   SELECT KEY % 10 AS ID, SUBSTRING(VALUE, 1, 4) AS NAME FROM spark_catalog.`default`.SRC;
-
-.. image:: ../../imgs/dbeaver/metadata.png
-
-.. code-block:: sql
-
-   DROP TABLE spark_catalog.`default`.SRC;
-
-Client Authentication
----------------------
-For kerberized kyuubi clusters, please refer to `Kerberos Authentication`_ for more information.
-
-.. _DBeaver: https://dbeaver.io/
-.. _DBeaver Wiki: https://github.com/dbeaver/dbeaver/wiki
-.. _Download DBeaver: https://dbeaver.io/download/
-.. _Kerberos Authentication: ../advanced/kerberized_kyuubi.html#bi-tools
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/hue.md.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/hue.md.txt
deleted file mode 100644
index ff11632..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/hue.md.txt
+++ /dev/null
@@ -1,130 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Cloudera Hue
-
-## What is Hue
-
-[Hue](https://gethue.com/) is an open source SQL Assistant for Databases & Data Warehouses.
-
-## Preparation
-
-### Get Kyuubi Started
-
-[Get the server started](../../quick_start.md) first before you try Hue with Kyuubi.
-
-```bash
-Welcome to
-  __  __                           __
- /\ \/\ \                         /\ \      __
- \ \ \/'/'  __  __  __  __  __  __\ \ \____/\_\
-  \ \ , <  /\ \/\ \/\ \/\ \/\ \/\ \\ \ '__`\/\ \
-   \ \ \\`\\ \ \_\ \ \ \_\ \ \ \_\ \\ \ \L\ \ \ \
-    \ \_\ \_\/`____ \ \____/\ \____/ \ \_,__/\ \_\
-     \/_/\/_/`/___/> \/___/  \/___/   \/___/  \/_/
-                /\___/
-                \/__/
-```
-
-## Run Hue in Docker
-
-Here we demonstrate running Kyuubi on macOS and Hue on [Docker for Mac](https://docs.docker.com/docker-for-mac/). 
-There are several known network limitations, and you can find 
-[workarounds here](https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds).
-
-### Configuration
-
-1. Copy a configuration template from the Hue Docker image.
-
-```
-docker run --rm gethue/hue:latest cat /usr/share/hue/desktop/conf/hue.ini > hue.ini
-```
-
-2. Modify the `hue.ini`
-
-```ini
-[beeswax]
-  # Kyuubi 1.1.x support thrift version from 1 to 10
-  thrift_version=7
-  # change to your username to avoid permissions issue for local test
-  auth_username=chengpan
-
-[notebook]
-  [[interpreters]]
-    [[[sql]]]
-      name=SparkSQL
-      interface=hiveserver2
-      
-[spark]
-  # Host of the Spark Thrift Server
-  # For macOS users, use docker.for.mac.host.internal to access host network
-  sql_server_host=docker.for.mac.host.internal
-
-  # Port of the Spark Thrift Server
-  sql_server_port=10009
-  
-# other configurations
-...
-```
-
-### Start Hue in Docker
-
-```
-docker run -p 8888:8888 -v $PWD/hue.ini:/usr/share/hue/desktop/conf/hue.ini gethue/hue:latest
-```
-
-Go to http://localhost:8888/ and follow the guide to create an account.
-
-![](../../imgs/hue/start.png)
-
-Having fun with Hue and Kyuubi!
-
-![](../../imgs/hue/spark_sql_docker.png)
-
-## For CDH 6.x Users
-
-If you are using CDH 6.x, note that CDH 6.x blocks Spark by default; you need to modify the configuration to 
-override `desktop.app_blacklist` and remove this restriction.
-
-Configure Hue in Cloudera Manager.
-
-![](../../imgs/hue/cloudera_manager.png)
-
-Refer to the following configuration and tune it to fit your environment.
-```
-[desktop]
- app_blacklist=zookeeper,hbase,impala,search,sqoop,security
- use_new_editor=true
-[[interpreters]]
-[[[sparksql]]]
-  name=Spark SQL
-  interface=hiveserver2
-  # other interpreters
-  ...
-[spark]
-sql_server_host=kyuubi-server-host
-sql_server_port=10009
-```
-
-You need to restart the Hue service to activate the configuration changes, and then Spark SQL will be available in the editor list.
-
-![](../../imgs/hue/editor.png)
-
-Having fun with Hue and Kyuubi!
-
-![](../../imgs/hue/spark_sql_cdh6.png)
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/index.rst.txt
deleted file mode 100644
index b11076a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/index.rst.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Business Intelligence Tools and SQL IDEs
-========================================
-
-Kyuubi provides a standard JDBC/ODBC interface over thrift that allows various existing BI tools and SQL clients/IDEs to connect.
-
-.. note:: Is your favorite tool missing?
-   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
-
-.. toctree::
-    :maxdepth: 1
-
-    superset
-    hue
-    datagrip
-    dbeaver
-    powerbi
-    tableau
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/powerbi.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/powerbi.rst.txt
deleted file mode 100644
index 2da6747..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/powerbi.rst.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`PowerBI`_
-==========
-
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to complete it if it applies to you.
-
-.. _PowerBI: https://powerbi.microsoft.com/en-us/
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/superset.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/superset.rst.txt
deleted file mode 100644
index 52afdd9..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/superset.rst.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Apache Superset`_
-==================
-
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to complete it if it applies to you.
-
-.. _Apache Superset: https://superset.apache.org/
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/tableau.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/bi_tools/tableau.rst.txt
deleted file mode 100644
index ef6e6aa..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/bi_tools/tableau.rst.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Tableau`_
-==========
-
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to complete it if it applies to you.
-
-.. _Tableau: https://www.tableau.com/
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/client/cli/hive_beeline.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/cli/hive_beeline.rst.txt
deleted file mode 100644
index fda925a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/cli/hive_beeline.rst.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Hive Beeline
-============
-
-Kyuubi supports Apache Hive beeline to work with the Kyuubi server.
-Hive beeline is a `SQLLine CLI <http://sqlline.sourceforge.net/>`_ based on the `Hive JDBC Driver <../jdbc/hive_jdbc.html>`_.
-
-Prerequisites
--------------
-
-- Kyuubi server installed and launched.
-- Hive beeline installed
-
-.. important:: Kyuubi does not support embedded mode, in which beeline and the server run in the same process.
-   It always uses remote mode, connecting beeline to a separate server process over thrift.
-
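-A minimal connection sketch, assuming the Kyuubi server listens on the default port 10009 of ``localhost`` (adjust host, port and user to your deployment):
-
-.. code-block:: shell
-
-   beeline -u 'jdbc:hive2://localhost:10009/default' -n <username>
-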
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to fix it if appropriate for you.
diff --git a/content/docs/r1.6.0-incubating/_sources/client/cli/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/cli/index.rst.txt
deleted file mode 100644
index 61be9ad..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/cli/index.rst.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Command Line Interfaces (CLIs)
-==============================
-
-.. toctree::
-    :maxdepth: 2
-
-    kyuubi_beeline
-    hive_beeline
diff --git a/content/docs/r1.6.0-incubating/_sources/client/cli/kyuubi_beeline.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/cli/kyuubi_beeline.rst.txt
deleted file mode 100644
index e217810..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/cli/kyuubi_beeline.rst.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Kyuubi Beeline
-==============
-
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to fix it if appropriate for you.
-
-
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/index.rst.txt
deleted file mode 100644
index 049f593..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/index.rst.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Clients
-=======
-
-This section aims to document the APIs, clients and tools for end-users who do not necessarily need to care about deployment on the Kyuubi server side.
-
-Kyuubi provides standards-based drivers for JDBC and ODBC, enabling developers to build database applications in their language of choice.
-
-In addition, APIs like REST, Thrift, etc., allow developers to access Kyuubi directly and flexibly.
-
-.. note::
-   When you try some of the examples in this section, make sure you have a running Kyuubi server available.
-
-.. toctree::
-    :maxdepth: 2
-
-    jdbc/index
-    cli/index
-    bi_tools/index
-    odbc/index
-    thrift/index
-    rest/index
-    ui/index
-    python/index
-    advanced/index
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/client/jdbc/hive_jdbc.md.txt b/content/docs/r1.6.0-incubating/_sources/client/jdbc/hive_jdbc.md.txt
deleted file mode 100644
index f0886d7..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/jdbc/hive_jdbc.md.txt
+++ /dev/null
@@ -1,82 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Hive JDBC Driver
-
-
-## Instructions
-
-Kyuubi does not provide its own JDBC driver so far,
-as it is fully compatible with Hive JDBC and ODBC drivers that let you connect to popular Business Intelligence (BI) tools to query,
-analyze and visualize data through Spark SQL engines.
-
-
-## Install Hive JDBC
-
-For programming, the easiest way to get `hive-jdbc` is from [Maven Central](https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc). For example,
-
-- **maven**
-```xml
-<dependency>
-    <groupId>org.apache.hive</groupId>
-    <artifactId>hive-jdbc</artifactId>
-    <version>2.3.8</version>
-</dependency>
-```
-
-- **sbt**
-```scala
-libraryDependencies += "org.apache.hive" % "hive-jdbc" % "2.3.8"
-```
-
-- **gradle**
-```gradle
-implementation group: 'org.apache.hive', name: 'hive-jdbc', version: '2.3.8'
-```
-
-For BI tools, please refer to [Quick Start](../quick_start/index.html) to check the guide for the BI tool you are using.
-If you find there is no specific document for the BI tool that you are using, don't worry, the configuration part for all BI tools is basically the same.
-Also, we would appreciate it if you could help us improve the document.
-
-
-## JDBC URL
-
-JDBC URLs have the following format:
-
-```
-jdbc:hive2://<host>:<port>/<dbName>;<sessionVars>?<kyuubiConfs>#<[spark|hive]Vars>
-```
-
-JDBC Parameter | Description
----------------| -----------
-host | The cluster node hosting Kyuubi Server.
-port | The port number on which the Kyuubi Server is listening.
-dbName | Optional database name to set the current database to run the query against, use `default` if absent.
-sessionVars | Optional `Semicolon(;)` separated `key=value` parameters for the JDBC/ODBC driver. Such as `user`, `password` and `hive.server2.proxy.user`.
-kyuubiConfs | Optional `Semicolon(;)` separated `key=value` parameters for Kyuubi server to create the corresponding engine, dismissed if engine exists.
-[spark&#124;hive]Vars | Optional `Semicolon(;)` separated `key=value` parameters for Spark/Hive variables used for variable substitution.
-
-## Example
-
-```
-jdbc:hive2://localhost:10009/default;hive.server2.proxy.user=proxy_user?kyuubi.engine.share.level=CONNECTION;spark.ui.enabled=false#var_x=y
-```
-
-## Unsupported Hive Features
-
-- Connect to HiveServer2 using HTTP transport. ```transportMode=http```
diff --git a/content/docs/r1.6.0-incubating/_sources/client/jdbc/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/jdbc/index.rst.txt
deleted file mode 100644
index 31871f1..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/jdbc/index.rst.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-JDBC Drivers
-============
-
-.. toctree::
-    :maxdepth: 1
-
-    kyuubi_jdbc
-    hive_jdbc
-    mysql_jdbc
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/jdbc/kyuubi_jdbc.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/jdbc/kyuubi_jdbc.rst.txt
deleted file mode 100644
index fdc40d5..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/jdbc/kyuubi_jdbc.rst.txt
+++ /dev/null
@@ -1,160 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Kyuubi Hive JDBC Driver
-=======================
-
-.. versionadded:: 1.4.0
-   Since 1.4.0, the Kyuubi community maintains a forked Hive JDBC driver module and provides both shaded and non-shaded packages.
-
-This package aims to support some functionality missing from the original Hive JDBC driver.
-For Kyuubi engines that support multiple catalogs, it provides meta APIs for better support.
-The behaviors of the original Hive JDBC driver are retained.
-
-To access a Hive data warehouse or new lakehouse formats, such as Apache Iceberg, Apache Hudi, or Delta Lake, using the Kyuubi JDBC driver, you need to configure
-the following:
-
-- The list of driver library files - :ref:`referencing-libraries`.
-- The Driver or DataSource class - :ref:`registering_class`.
-- The connection URL for the driver - :ref:`building_url`
-
-.. _referencing-libraries:
-
-Referencing the JDBC Driver Libraries
--------------------------------------
-
-Before you use the jdbc driver for Apache Kyuubi, the JDBC application or Java code that
-you are using to connect to your data must be able to access the driver JAR files.
-
-Using the Driver in Java Code
-*****************************
-
-In the code, specify the artifact `kyuubi-hive-jdbc-shaded` from `Maven Central`_ according to the build tool you use.
-
-Maven
-^^^^^
-
-.. code-block:: xml
-
-   <dependency>
-       <groupId>org.apache.kyuubi</groupId>
-       <artifactId>kyuubi-hive-jdbc-shaded</artifactId>
-       <version>1.5.2-incubating</version>
-   </dependency>
-
-Sbt
-^^^
-
-.. code-block:: sbt
-
-   libraryDependencies += "org.apache.kyuubi" % "kyuubi-hive-jdbc-shaded" % "1.5.2-incubating"
-
-
-Gradle
-^^^^^^
-
-.. code-block:: gradle
-
-   implementation group: 'org.apache.kyuubi', name: 'kyuubi-hive-jdbc-shaded', version: '1.5.2-incubating'
-
-Using the Driver in a JDBC Application
-**************************************
-
-For `JDBC Applications`_, such as BI tools, SQL IDEs, please check the specific guide for detailed information.
-
-.. note:: Is your favorite tool missing?
-   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
-
-.. _registering_class:
-
-Registering the Driver Class
-----------------------------
-
-Before connecting to your data, you must register the JDBC Driver class for your application.
-
-- org.apache.kyuubi.jdbc.KyuubiHiveDriver
-- org.apache.kyuubi.jdbc.KyuubiDriver (Deprecated)
-
-The following sample code shows how to use the `java.sql.DriverManager`_ class to establish a
-connection for JDBC:
-
-.. code-block:: java
-
-   private static Connection connectViaDM() throws Exception
-   {
-      // CONNECTION_URL is a Kyuubi JDBC URL, see "Building the Connection URL" below
-      Connection connection = DriverManager.getConnection(CONNECTION_URL);
-      return connection;
-   }
-
-.. _building_url:
-
-Building the Connection URL
----------------------------
-
-Basic Connection URL format
-***************************
-
-Use the connection URL to supply connection information to the kyuubi server or cluster that you are
-accessing. The following is the format of the connection URL for the Kyuubi Hive JDBC Driver
-
-.. code-block:: jdbc
-
-   jdbc:subprotocol://host:port/schema;<clientProperties;><[#|?]sessionProperties>
-
-- subprotocol: kyuubi or hive2
-- host: DNS or IP address of the kyuubi server
-- port: The number of the TCP port that the server uses to listen for client requests
-- dbName: Optional database name to set the current database to run the query against, use `default` if absent.
-- clientProperties: Optional `semicolon(;)` separated `key=value` parameters recognized by the driver and affecting the client behavior locally, e.g., user=foo;password=bar.
-- sessionProperties: Optional `semicolon(;)` separated `key=value` parameters used to configure the session, operation or background engines.
-  For instance, `kyuubi.engine.share.level=CONNECTION` determines the background engine instance is used only by the current connection. `spark.ui.enabled=false` disables the Spark UI of the engine.
-
-.. important::
-   - The sessionProperties MUST come after a leading number sign (#) or question mark (?).
-   - Properties are case-sensitive
-   - Do not duplicate properties in the connection URL
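-
-For instance, a sketch of passing such a URL to Beeline (the host, credentials and properties below are illustrative placeholders):
-
-.. code-block:: shell
-
-   beeline -u 'jdbc:hive2://kyuubi.example.com:10009/default;user=foo;password=bar#kyuubi.engine.share.level=CONNECTION;spark.ui.enabled=false'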
-
-Connection URL over Http
-************************
-
-.. versionadded:: 1.6.0
-
-.. code-block:: jdbc
-
-   jdbc:subprotocol://host:port/schema;transportMode=http;httpPath=<http_endpoint>
-
-- http_endpoint is the corresponding HTTP endpoint configured by `kyuubi.frontend.thrift.http.path` at the server side.
-
-Connection URL over Service Discovery
-*************************************
-
-.. code-block:: jdbc
-
-   jdbc:subprotocol://<zookeeper quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi
-
-- zookeeper quorum is the corresponding zookeeper cluster configured by `kyuubi.ha.zookeeper.quorum` at the server side.
-- zooKeeperNamespace is the corresponding namespace configured by `kyuubi.ha.zookeeper.namespace` at the server side.
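-
-For example, a sketch of connecting through ZooKeeper-based service discovery with Beeline (the ZooKeeper quorum below is an illustrative placeholder):
-
-.. code-block:: shell
-
-   beeline -u 'jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi'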
-
-Authentication
---------------
-
-
-DataTypes
----------
-
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.kyuubi/kyuubi-hive-jdbc-shaded
-.. _JDBC Applications: ../bi_tools/index.html
-.. _java.sql.DriverManager: https://docs.oracle.com/javase/8/docs/api/java/sql/DriverManager.html
diff --git a/content/docs/r1.6.0-incubating/_sources/client/jdbc/mysql_jdbc.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/jdbc/mysql_jdbc.rst.txt
deleted file mode 100644
index 0702fcd..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/jdbc/mysql_jdbc.rst.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-`MySQL Connectors`_
-===================
-
-.. versionadded:: 1.4.0
-
-Kyuubi provides a frontend service that enables connectivity and accessibility from MySQL connectors.
-
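-A hypothetical connection sketch with the stock ``mysql`` command line client, assuming the MySQL frontend protocol is enabled on the server and listening on its configured port:
-
-.. code-block:: shell
-
-   mysql -h <kyuubi-server-host> -P <mysql-frontend-port> -u <user> -p
-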
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to fix it if appropriate for you.
-
-.. _MySQL Connectors: https://www.mysql.com/products/connector/
diff --git a/content/docs/r1.6.0-incubating/_sources/client/odbc/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/odbc/index.rst.txt
deleted file mode 100644
index 0d4ed8b..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/odbc/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-ODBC Drivers
-============================
-
-.. toctree::
-    :maxdepth: 2
-
-    todo
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/python/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/python/index.rst.txt
deleted file mode 100644
index 6dfbec0..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/python/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-Python DB-APIs
-==============
-
-.. toctree::
-    :maxdepth: 2
-
-    pyhive
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/python/pyhive.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/python/pyhive.rst.txt
deleted file mode 100644
index fe2f9cf..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/python/pyhive.rst.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-`PyHive`_
-=========
-
-.. warning:: The document you are visiting now is incomplete, please help the Kyuubi community to fix it if appropriate for you.
-
-.. _PyHive: https://github.com/dropbox/PyHive
diff --git a/content/docs/r1.6.0-incubating/_sources/client/rest/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/rest/index.rst.txt
deleted file mode 100644
index 3e94c0e..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/rest/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-RESTful APIs and Clients
-========================
-
-.. toctree::
-    :maxdepth: 2
-
-    rest_api
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/rest/rest_api.md.txt b/content/docs/r1.6.0-incubating/_sources/client/rest/rest_api.md.txt
deleted file mode 100644
index e9a40c9..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/rest/rest_api.md.txt
+++ /dev/null
@@ -1,124 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# REST API v1
-
-Note: the current API version is v1, and the base URI is `/api/v1`.
-
-## Batch Resource
-
-### GET /batches
-
-Returns all the batches.
-
-#### Request Parameters
-
-| Name       | Description                                                                                             | Type   |
-| :--------- |:--------------------------------------------------------------------------------------------------------| :----- |
-| batchType  | The batch type, such as spark/flink; if no batchType is specified,<br/> all types are returned          | String |
-| batchState | The valid batch state can be one of the following:<br/> PENDING, RUNNING, FINISHED, ERROR, CANCELED     | String |
-| batchUser  | The user name that created the batch                                                                    | String |
-| createTime | Return the batches created after this timestamp                                                         | Long   |
-| endTime    | Return the batches that ended before this timestamp                                                      | Long   |
-| from       | The start index to fetch batches                                                                         | Int    |
-| size       | Number of batches to fetch                                                                               | Int    |
-
-#### Response Body
-
-| Name    | Description                         | Type |
-| :------ | :---------------------------------- | :--- |
-| from    | The start index of fetched batches  | Int  |
-| total   | Number of batches fetched           | Int  |
-| batches | [Batch](#batch) List                | List |
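-
-For instance, a sketch of listing running Spark batches with `curl` (the host and REST frontend port below are placeholders for your deployment; additional authentication may be required depending on the server configuration):
-
-```shell
-curl -X GET 'http://<kyuubi-host>:<rest-port>/api/v1/batches?batchType=spark&batchState=RUNNING&from=0&size=10'
-```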
-
-### POST /batches
-
-#### Request Body
-
-| Name      | Description                                        | Type             |
-| :-------- |:---------------------------------------------------|:-----------------|
-| batchType | The batch type, such as Spark, Flink               | String           |
-| resource  | The resource containing the application to execute | Path (required)  |
-| className | Application main class                             | String(required) |
-| name      | The name of this batch.                            | String           |
-| conf      | Configuration properties                           | Map of key=val   |
-| args      | Command line arguments for the application         | List of Strings  |
-
-
-#### Response Body
-
-The created [Batch](#batch) object.
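-
-A sketch of submitting a Spark batch with `curl` (the host, port, resource path, class name and arguments below are placeholders):
-
-```shell
-curl -X POST 'http://<kyuubi-host>:<rest-port>/api/v1/batches' \
-  -H 'Content-Type: application/json' \
-  -d '{
-        "batchType": "SPARK",
-        "resource": "/path/to/spark-app.jar",
-        "className": "org.example.SparkApp",
-        "name": "example-batch",
-        "conf": {"spark.master": "yarn"},
-        "args": ["--input", "/path/to/input"]
-      }'
-```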
-
-### GET /batches/{batchId}
-
-Returns the batch information.
-
-#### Response Body
-
-The [Batch](#batch).
-
-### DELETE /batches/${batchId}
-
-Kill the batch if it is still running.
-
-#### Request Parameters
-
-| Name                    | Description                   | Type             |
-| :---------------------- | :---------------------------- | :--------------- |
-| hive.server2.proxy.user | the proxy user to impersonate | String(optional) |
-
-#### Response Body
-
-| Name    | Description                           | Type    |
-| :------ |:--------------------------------------| :------ |
-| success | Whether the batch was killed successfully | Boolean |
-| msg     | The kill batch message                | String  |
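-
-For example, a sketch of killing a batch with `curl` (the identifiers below are placeholders; the proxy user parameter is optional):
-
-```shell
-curl -X DELETE 'http://<kyuubi-host>:<rest-port>/api/v1/batches/<batchId>?hive.server2.proxy.user=proxy_user'
-```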
-
-### GET /batches/${batchId}/localLog
-
-Gets the local log lines from this batch.
-
-#### Request Parameters
-
-| Name | Description                       | Type |
-| :--- | :-------------------------------- | :--- |
-| from | Offset                            | Int  |
-| size | Max number of log lines to return | Int  |
-
-#### Response Body
-
-| Name      | Description       | Type          |
-| :-------- | :---------------- |:--------------|
-| logRowSet | The log lines     | List of string |
-| rowCount  | The log row count | Int           |
-
-### Batch
-
-| Name           | Description                                                       | Type   |
-| :------------- |:------------------------------------------------------------------| :----- |
-| id             | The batch id                                                      | String |
-| user           | The user who created the batch                                    | String |
-| batchType      | The batch type                                                    | String |
-| name           | The batch name                                                    | String |
-| appId          | The batch application Id                                          | String |
-| appUrl         | The batch application tracking url                                | String |
-| appState       | The batch application state                                       | String |
-| appDiagnostic  | The batch application diagnostic                                  | String |
-| kyuubiInstance | The kyuubi instance that created the batch                        | String |
-| state          | The kyuubi batch operation state                                  | String |
-| createTime     | The batch create time                                             | Long   |
-| endTime        | The batch end time, if it has not been terminated, the value is 0 | Long   |
diff --git a/content/docs/r1.6.0-incubating/_sources/client/thrift/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/thrift/index.rst.txt
deleted file mode 100644
index e7def48..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/thrift/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-Thrift APIs
-===========
-
-.. toctree::
-    :maxdepth: 2
-
-    hive_beeline
-
diff --git a/content/docs/r1.6.0-incubating/_sources/client/ui/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/client/ui/index.rst.txt
deleted file mode 100644
index 63a02cb..0000000
--- a/content/docs/r1.6.0-incubating/_sources/client/ui/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-Web UI
-======
-
-.. toctree::
-    :maxdepth: 2
-
-    hive_beeline
-
diff --git a/content/docs/r1.6.0-incubating/_sources/community/CONTRIBUTING.md.txt b/content/docs/r1.6.0-incubating/_sources/community/CONTRIBUTING.md.txt
deleted file mode 100644
index a1c0bd9..0000000
--- a/content/docs/r1.6.0-incubating/_sources/community/CONTRIBUTING.md.txt
+++ /dev/null
@@ -1,61 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Contributing to Apache Kyuubi
-
-Thanks for your interest in the Apache Kyuubi project.
-Contributions are welcome and are greatly appreciated!
-Every little bit helps, and credit will always be given.
-
-This page provides some orientation and resources we have for you to get involved.
-It also offers recommendations on getting the best results when engaging with the community.
-We hope that this will be a pleasant first experience and that you will return to continue contributing.
-
-## Get Involved
-
-In the process of using Apache Kyuubi, if you have any questions, suggestions, or improvement ideas, you can participate in the Kyuubi community building through the following suggested channels.
-
-- Join the [Mailing Lists](https://kyuubi.apache.org/mailing_lists.html) - the best way to keep up-to-date with the community.
-- [Issue Tracker](https://kyuubi.apache.org/issue_tracking.html) - tracking bugs, ideas, plans, etc.
-- [Github Discussions](https://github.com/apache/incubator-kyuubi/discussions) - second only to the mailing list for anything else you want to share or ask
-
-## Contributing Guide
-
-As a community-driven project, all bits of help are welcome.
-
-Contributing code is excellent, but that’s probably not the first place to start.
-There are many ways to make valuable contributions to the project and community.
-
-You can make various types of contributions to Kyuubi, including the following but not limited to,
-
-- Answer questions in the  [Mailing Lists](https://kyuubi.apache.org/mailing_lists.html)
-- [Share your success stories with us](https://github.com/apache/incubator-kyuubi/discussions/925) 
-- Improve Documentation - [![Documentation Status](https://readthedocs.org/projects/kyuubi/badge/?version=latest)](https://kyuubi.apache.org/docs/latest/)
-- Test latest releases - [![Latest tag](https://img.shields.io/github/v/tag/apache/incubator-kyuubi?label=tag)](https://github.com/apache/incubator-kyuubi/tags)
-- Improve test coverage - [![codecov](https://codecov.io/gh/apache/incubator-kyuubi/branch/master/graph/badge.svg)](https://codecov.io/gh/apache/incubator-kyuubi)
-- Report bugs and better help developers to reproduce
-- Review changes
-- [Make a pull request](https://kyuubi.apache.org/pull_request.html)
-- Promote to others
-- Click the star button if you like this project
-
-## Easter Eggs for Contributors
-
-TBD, please be patient for the surprise.
-
-## IDE Setup Guide
-[IntelliJ IDEA Setup Guide](https://kyuubi.readthedocs.io/en/latest/develop_tools/idea_setup.html)
diff --git a/content/docs/r1.6.0-incubating/_sources/community/collaborators.md.txt b/content/docs/r1.6.0-incubating/_sources/community/collaborators.md.txt
deleted file mode 100644
index d424262..0000000
--- a/content/docs/r1.6.0-incubating/_sources/community/collaborators.md.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Collaborators
-
-[PPMC Members and Committers](https://people.apache.org/phonebook.html?podling=kyuubi)
-
-See full contributor list at [contributors](https://github.com/apache/incubator-kyuubi/graphs/contributors).
diff --git a/content/docs/r1.6.0-incubating/_sources/community/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/community/index.rst.txt
deleted file mode 100644
index 420905d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/community/index.rst.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-Community
-=========
-
-.. toctree::
-    :maxdepth: 2
-    :glob:
-
-    CONTRIBUTING
-    collaborators
-    release
-
diff --git a/content/docs/r1.6.0-incubating/_sources/community/release.md.txt b/content/docs/r1.6.0-incubating/_sources/community/release.md.txt
deleted file mode 100644
index 435226d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/community/release.md.txt
+++ /dev/null
@@ -1,283 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-Kyuubi Release Guide
-===
-
-## Introduction
-The Apache Kyuubi (Incubating) project periodically declares and publishes releases. A release is one or more packages
-of the project artifact(s) that are approved for general public distribution and use. They may come with various
-degrees of caveat regarding their perceived quality and potential for change, such as "alpha", "beta", "incubating",
-"stable", etc.
-
-The Kyuubi community treats releases with great importance. They are a public face of the project and most users
-interact with the project only through the releases. Releases are signed off by the entire Kyuubi community in a
-public vote.
-
-Each release is executed by a Release Manager, who is selected among the Kyuubi committers. This document describes
-the process that the Release Manager follows to perform a release. Any changes to this process should be discussed
-and adopted on the [dev mailing list](mailto:dev@kyuubi.apache.org).
-
-Please remember that publishing software has legal consequences. This guide complements the foundation-wide 
-[Product Release Policy](https://www.apache.org/dev/release.html) and 
-[Release Distribution Policy](https://www.apache.org/dev/release-distribution).
-
-### Overview
-
-The release process consists of several steps:
-
-1. Decide to release
-2. Prepare for the release
-3. Cut branch off for __major__ release
-4. Build a release candidate
-5. Vote on the release candidate
-6. If necessary, fix any issues and go back to step 3.
-7. Finalize the release
-8. Promote the release
-
-## Decide to release
-
-Deciding to release and selecting a Release Manager is the first step of the release process. This is a consensus-based
-decision of the entire community.
-
-Anybody can propose a release on the [dev mailing list](mailto:dev@kyuubi.apache.org), giving a solid argument and
-nominating a committer as the Release Manager (including themselves). There’s no formal process, no vote requirements,
-and no timing requirements. Any objections should be resolved by consensus before starting the release.
-
-In general, the community prefers to have a rotating set of 1-2 Release Managers. Keeping a small core set of managers
-allows enough people to build expertise in this area and improve processes over time, without Release Managers needing
-to re-learn the processes for each release. That said, if you are a committer interested in serving the community in
-this way, please reach out to the community on the [dev mailing list](mailto:dev@kyuubi.apache.org).
-
-### Checklist to proceed to the next step
-
-1. Community agrees to release
-2. Community selects a Release Manager
-
-## Prepare for the release
-
-Before your first release, you should perform one-time configuration steps. This will set up your security keys for
-signing the release and access to various release repositories.
-
-### One-time setup instructions
-
-#### ASF authentication
-
-The environment variables `ASF_USERNAME` and `ASF_PASSWORD` are used in several places throughout the release
-process; you can either set them up once in `~/.bashrc` or `~/.zshrc`, or export them in the terminal every time.
-
-```shell
-export ASF_USERNAME=<your apache username>
-export ASF_PASSWORD=<your apache password>
-```
-
-#### Java Home
-Make sure the environment variable `JAVA_HOME` is available; you can run `echo $JAVA_HOME` to check it.
-Note that the Java version should be 8.
-
-#### Subversion
-
-Besides `git`, `svn` is also required for an Apache release, please refer to
-https://www.apache.org/dev/version-control.html#https-svn for details.
-
-#### GPG Key
-
-You need to have a GPG key to sign the release artifacts. Please be aware of the ASF-wide
-[release signing guidelines](https://www.apache.org/dev/release-signing.html). If you don’t have a GPG key associated
-with your Apache account, please create one according to the guidelines.
-
-Determine your Apache GPG Key and Key ID, as follows:
-```shell
-gpg --list-keys --keyid-format SHORT
-```
-
-This will list your GPG keys. One of these should reflect your Apache account, for example:
-```shell
-pub   rsa4096 2021-08-30 [SC]
-      8FC8075E1FDC303276C676EE8001952629BCC75D
-uid           [ultimate] Cheng Pan <ch...@apache.org>
-sub   rsa4096 2021-08-30 [E]
-```
-
-> Note: To follow the [Apache's release specification](https://infra.apache.org/release-signing.html#note), all new RSA keys generated should be at least 4096 bits. Do not generate new DSA keys.
-
-Here, the key ID is the 8-digit hex string in the pub line: `29BCC75D`.
-
-To export the PGP public key, using:
-```shell
-gpg --armor --export 29BCC75D
-```
-
-If you have more than one GPG key, you can specify the default key as follows:
-```
-echo 'default-key <key-fpr>' > ~/.gnupg/gpg.conf
-```
-
-The last step is to update the KEYS file with your code signing key 
-https://www.apache.org/dev/openpgp.html#export-public-key
-
-```shell
-svn checkout --depth=files "https://dist.apache.org/repos/dist/release/incubator/kyuubi" work/svn-kyuubi
-
-(gpg --list-sigs "${ASF_USERNAME}@apache.org" && gpg --export --armor "${ASF_USERNAME}@apache.org") >> work/svn-kyuubi/KEYS
-
-svn commit --username "${ASF_USERNAME}" --password "${ASF_PASSWORD}" --message "Update KEYS" work/svn-kyuubi
-```
-
-In order to have the right permission to stage Java artifacts in the Apache Nexus staging repository, please submit your GPG public key to the Ubuntu key server via
-
-```shell
-gpg --keyserver hkp://keyserver.ubuntu.com --send-keys ${PUBLIC_KEY} # send public key to ubuntu server
-gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys ${PUBLIC_KEY} # verify
-```
-
-## Cut branch off for major release
-
-Kyuubi uses the version pattern `{MAJOR_VERSION}.{MINOR_VERSION}.{PATCH_VERSION}[-{OPTIONAL_SUFFIX}]`, e.g. `1.3.0-incubating`.
-__Major Release__ means `MAJOR_VERSION` or `MINOR_VERSION` changed, and __Patch Release__ means `PATCH_VERSION` changed.
-
-The main step towards preparing a major release is to create a release branch. This is done via standard Git branching
-mechanism and should be announced to the community once the branch is created.
-
-> Note: If you are releasing a patch version, you can ignore this step.
-
-The release branch pattern is `branch-{MAJOR_VERSION}.{MINOR_VERSION}`, e.g. `branch-1.3`.
-
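-For example, a sketch of cutting the branch for a hypothetical 1.6 line (assuming the branch is cut from `master` and the ASF remote is named `apache`):
-
-```shell
-git checkout master
-git checkout -b branch-1.6
-git push apache branch-1.6
-```
-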
-After cutting the release branch, don't forget to bump the version in the `master` branch.
-
-## Build a release candidate
-
-> Don't forget to switch to the release branch!  
-
-1. Set environment variables.
-
-```shell
-export RELEASE_VERSION=<release version, e.g. 1.3.0-incubating>
-export RELEASE_RC_NO=<RC number, e.g. 0>
-```
-
-2. Bump version.
-
-```shell
-build/mvn versions:set -DgenerateBackupPoms=false \
-  -DnewVersion="${RELEASE_VERSION}" \
-  -Pspark-3.2,spark-block-cleaner
-
-git commit -am "[RELEASE] Bump ${RELEASE_VERSION}"
-```
-
-3. Create a git tag for the release candidate.
-
-The tag pattern is `v${RELEASE_VERSION}-rc${RELEASE_RC_NO}`, e.g. `v1.3.0-incubating-rc0`
-
-> NOTE: After all the voting passed, be sure to create a final tag with the pattern: `v${RELEASE_VERSION}`
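-
-A sketch of tagging and pushing the release candidate (assuming the ASF remote is named `apache` and the environment variables from step 1 are set):
-
-```shell
-git tag "v${RELEASE_VERSION}-rc${RELEASE_RC_NO}"
-git push apache "v${RELEASE_VERSION}-rc${RELEASE_RC_NO}"
-```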
-
-4. Package the release binaries & sources, and upload them to the Apache staging SVN repo. Publish jars to the Apache
-staging Maven repo.
-
-```shell
-build/release/release.sh publish
-```
-
-To make your release available in the staging repository, you must close the staging repo in the [Apache Nexus](https://repository.apache.org/#stagingRepositories). Until you close it, you can re-run the deployment to the same staging repo multiple times; once closed, a new staging repo will be created for the next deployment, so the next RC (if need be) lands in a new repo. Once everything is good, close the staging repository on Apache Nexus.
-
-5. Generate a pre-release note from GitHub for the subsequent voting.
-
-Goto the [release page](https://github.com/apache/incubator-kyuubi/releases) and click the "Draft a new release" button, then it would jump to a new page to prepare the release.
-
-Fill in all the necessary information required by the form. At the bottom of the form, check the "This is a pre-release" checkbox. Finally, click the "Publish release" button to finish the step.
-
-> Note: the pre-release note is used for voting purposes. It will be marked with a **Pre-release** tag. After all the voting work (dev and general) is finished, do not forget to uncheck the "This is a pre-release" checkbox. The pre-release version comes from vx.y.z-incubating-rcN tags, and the final version should come from vx.y.z-incubating tags.
-
-## Vote on the release candidate
-
-The release voting takes place on the Apache Kyuubi (Incubating) developers list (the (P)PMC is voting).
-
-- If possible, attach a draft of the release notes with the email.
-- Recommend represent voting closing time in UTC format.
-- Make sure the email is in text format and the links are correct
-
-> Note: you can generate the voting mail content for the dev ML automatically by invoking the `build/release/script/dev_kyuubi_vote.sh` script.
-
-Once the vote is done, you should also send out a summary email with the totals, with a subject that looks
-something like __[VOTE][RESULT] ....__
-
-Then, you can move the release vote to the general incubator mailing list, and generate the voting mail content automatically by invoking the `build/release/script/general_incubator_vote.sh` script.
-Also, you should send out a summary email as for the dev ML vote.
-
-> Note: if the vote is canceled for any reason, you should restart the vote on the dev ML first.
-
-## Finalize the Release
-
-__Be Careful!__
-
-__THIS STEP IS IRREVERSIBLE so make sure you selected the correct staging repository.__
-__Once you move the artifacts into the release folder, they cannot be removed.__
-
-After the vote passes, to upload the binaries to Apache mirrors, you move the binaries from dev directory (this should
-be where they are voted) to release directory. This "moving" is the only way you can add stuff to the actual release
-directory. (Note: only (P)PMC members can move to release directory)
-
-Move the sub-directory in "dev" to the corresponding directory in "release". If you've added your signing key to the
-KEYS file, also update the release copy.
-
-```shell
-build/release/release.sh finalize
-```
-
-Verify that the resources are present in https://www.apache.org/dist/incubator/kyuubi/. It may take a while for them
-to be visible. This will be mirrored throughout the Apache network.
-
-For Maven Central Repository, you can Release from the [Apache Nexus Repository Manager](https://repository.apache.org/).
-Log in, open Staging Repositories, find the one voted on, select and click Release and confirm. If successful, it should
-show up under https://repository.apache.org/content/repositories/releases/org/apache/kyuubi/ and the same under 
-https://repository.apache.org/content/groups/maven-staging-group/org/apache/kyuubi/ (look for the correct release version).
-After some time this will be sync’d to [Maven Central](https://search.maven.org/) automatically.
-
-## Promote the release
-
-### Update Website
-
-Fork and clone [Apache Kyuubi website](https://github.com/apache/incubator-kyuubi-website)
-
-1. Add a new markdown file in `src/zh/news/`, `src/en/news/`
-2. Add a new markdown file in `src/zh/release/`, `src/en/release/`
-3. Follow [Build Document](../develop_tools/build_document.md) to build documents, then copy `apache/incubator-kyuubi`'s
-   folder `docs/_build/html` to `apache/incubator-kyuubi-website`'s folder `content/docs/r{RELEASE_VERSION}`
-
-### Create an Announcement
-
-Once everything is working, create an announcement on the website and then send an e-mail to the mailing list.
-You can generate the announcement automatically via the `build/release/script/announce.sh` script.
-The mailing list includes: `general@incubator.apache.org`, `announce@apache.org`, `dev@kyuubi.apache.org`, `user@spark.apache.org`.
-
-Note that you must use your apache.org email to send the announcement to `announce@apache.org`.
-
-Enjoy an adult beverage of your choice, and congratulations on making a Kyuubi release.
-
-
-## Remove the dist repo directories for deprecated release candidates
-
-Remove the deprecated dist repo directories at last. 
-
-```shell
-cd work/svn-dev
-svn delete https://dist.apache.org/repos/dist/dev/incubator/kyuubi/{RELEASE_TAG} \
-  --username "${ASF_USERNAME}" \
-  --password "${ASF_PASSWORD}" \
-  --message "Remove deprecated Apache Kyuubi ${RELEASE_TAG}" 
-```
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/flink/flink_table_store.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/flink/flink_table_store.rst.txt
deleted file mode 100644
index c2fd667..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/flink/flink_table_store.rst.txt
+++ /dev/null
@@ -1,111 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Flink Table Store`_
-====================
-
-Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
-supporting high-speed data ingestion and timely data query.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
-   For the knowledge about Flink Table Store not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using Kyuubi, we can run SQL queries towards Flink Table Store, which is more
-convenient, easier to understand, and easier to extend than directly using
-Flink to manipulate Flink Table Store.
-
-Flink Table Store Integration
------------------------------
-
-To enable the integration of kyuubi flink sql engine and Flink Table Store, you need to:
-
-- Referencing the Flink Table Store :ref:`dependencies<flink-table-store-deps>`
-
-.. _flink-table-store-deps:
-
-Dependencies
-************
-
-The **classpath** of the Kyuubi Flink SQL engine with Flink Table Store support consists of
-
-1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of flink distribution
-3. flink-table-store-dist-<version>.jar (example: flink-table-store-dist-0.2.jar), which can be found in the `Maven Central`_
-
-In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use these methods:
-
-1. Put the Flink Table Store packages into ``$FLINK_HOME/lib`` directly
-2. Set the ``HADOOP_CLASSPATH`` environment variable or copy the `Pre-bundled Hadoop Jar`_ to ``$FLINK_HOME/lib``.
-
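-For example, a sketch of the first approach (the jar name and version below are illustrative):
-
-.. code-block:: shell
-
-   cp /path/to/flink-table-store-dist-0.2.jar $FLINK_HOME/lib/
-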
-.. warning::
-   Please mind the compatibility of different Flink Table Store and Flink versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
-
-Flink Table Store Operations
-----------------------------
-
-Taking ``CREATE CATALOG`` as an example,
-
-.. code-block:: sql
-
-   CREATE CATALOG my_catalog WITH (
-     'type'='table-store',
-     'warehouse'='hdfs://nn:8020/warehouse/path' -- or 'file:///tmp/foo/bar'
-   );
-
-   USE CATALOG my_catalog;
-
-Taking ``CREATE TABLE`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE MyTable (
-     user_id BIGINT,
-     item_id BIGINT,
-     behavior STRING,
-     dt STRING,
-     PRIMARY KEY (dt, user_id) NOT ENFORCED
-   ) PARTITIONED BY (dt) WITH (
-     'bucket' = '4'
-   );
-
-Taking ``Query Table`` as an example,
-
-.. code-block:: sql
-
-   SET 'execution.runtime-mode' = 'batch';
-   SELECT * FROM orders WHERE catalog_id=1025;
-
-Taking ``Streaming Query`` as an example,
-
-.. code-block:: sql
-
-   SET 'execution.runtime-mode' = 'streaming';
-   SELECT * FROM MyTable /*+ OPTIONS ('log.scan'='latest') */;
-
-Taking ``Rescale Bucket`` as an example,
-
-.. code-block:: sql
-
-   ALTER TABLE my_table SET ('bucket' = '4');
-   INSERT OVERWRITE my_table PARTITION (dt = '2022-01-01');
-
-
-.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.flink/flink-table-store-dist
-.. _Pre-bundled Hadoop Jar: https://flink.apache.org/downloads.html
-.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/flink/hudi.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/flink/hudi.rst.txt
deleted file mode 100644
index 0000bde..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/flink/hudi.rst.txt
+++ /dev/null
@@ -1,117 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Hudi`_
-========
-
-Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform.
-Apache Hudi brings core warehouse and database functionality directly to a data lake.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Hudi`_.
-   For the knowledge about Hudi not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using Kyuubi, we can run SQL queries towards Hudi, which is more convenient, easier to understand,
-and easier to extend than directly using Flink to manipulate Hudi.
-
-Hudi Integration
-----------------
-
-To enable the integration of kyuubi flink sql engine and Hudi through
-Catalog APIs, you need to:
-
-- Referencing the Hudi :ref:`dependencies<flink-hudi-deps>`
-
-.. _flink-hudi-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi flink sql engine with Hudi supported consists of
-
-1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of flink distribution
-3. hudi-flink<flink.version>-bundle_<scala.version>-<hudi.version>.jar (example: hudi-flink1.14-bundle_2.12-0.11.1.jar), which can be found in the `Maven Central`_
-
-In order to make the Hudi packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the Hudi packages into ``$FLINK_HOME/lib`` directly
-2. Set ``pipeline.jars=/path/to/hudi-flink-bundle``
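-
-For instance, a minimal sketch of the second approach, assuming you pass the option as a Flink SQL
-``SET`` statement (the local path and bundle version below are placeholders that must match your
-own environment):
-
-.. code-block:: sql
-
-   -- make the Hudi Flink bundle available on the job classpath
-   SET 'pipeline.jars' = 'file:///path/to/hudi-flink1.14-bundle_2.12-0.11.1.jar';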
-
-Hudi Operations
----------------
-
-Taking ``Create Table`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE t1 (
-     id INT PRIMARY KEY NOT ENFORCED,
-     name STRING,
-     price DOUBLE
-   ) WITH (
-     'connector' = 'hudi',
-     'path' = 's3://bucket-name/hudi/',
-     'table.type' = 'MERGE_ON_READ' -- this creates a MERGE_ON_READ table, by default is COPY_ON_WRITE
-   );
-
-Taking ``Query Data`` as an example,
-
-.. code-block:: sql
-
-   SELECT * FROM t1;
-
-Taking ``Insert and Update Data`` as an example,
-
-.. code-block:: sql
-
-   INSERT INTO t1 VALUES (1, 'Lucas' , 2.71828);
-
-Taking ``Streaming Query`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE t1 (
-     uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
-     name VARCHAR(10),
-     age INT,
-     ts TIMESTAMP(3),
-     `partition` VARCHAR(20)
-   )
-   PARTITIONED BY (`partition`)
-   WITH (
-     'connector' = 'hudi',
-     'path' = '${path}',
-     'table.type' = 'MERGE_ON_READ',
-     'read.streaming.enabled' = 'true',  -- this option enable the streaming read
-     'read.start-commit' = '20210316134557', -- specifies the start commit instant time
-     'read.streaming.check-interval' = '4' -- specifies the check interval for finding new source commits, default 60s.
-   );
-
-   -- Then query the table in stream mode
-   SELECT * FROM t1;
-
-Taking ``Delete Data`` as an example,
-
-A streaming query can delete data implicitly.
-When consuming data in a streaming query,
-the Hudi Flink source can also accept change logs from the underlying data source,
-and then apply the UPDATE and DELETE operations at the per-row level.
-
-
-.. _Hudi: https://hudi.apache.org/
-.. _Official Documentation: https://hudi.apache.org/docs/overview
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.hudi
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/flink/iceberg.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/flink/iceberg.rst.txt
deleted file mode 100644
index ab4a701..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/flink/iceberg.rst.txt
+++ /dev/null
@@ -1,121 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Iceberg`_
-==========
-
-Apache Iceberg is an open table format for huge analytic datasets.
-Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
-using a high-performance table format that works just like a SQL table.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
-   For the knowledge about Iceberg not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards Iceberg in a way that is
-more convenient, easier to understand, and easier to extend than
-manipulating Iceberg directly with Flink.
-
-Iceberg Integration
--------------------
-
-To enable the integration of kyuubi flink sql engine and Iceberg through Catalog APIs, you need to:
-
-- Referencing the Iceberg :ref:`dependencies<flink-iceberg-deps>`
-
-.. _flink-iceberg-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi flink sql engine with Iceberg supported consists of
-
-1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of flink distribution
-3. iceberg-flink-runtime-<flink.version>-<iceberg.version>.jar (example: iceberg-flink-runtime-1.14-0.14.0.jar), which can be found in the `Maven Central`_
-
-In order to make the Iceberg packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the Iceberg packages into ``$FLINK_HOME/lib`` directly
-2. Set ``pipeline.jars=/path/to/iceberg-flink-runtime``
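-
-For instance, a minimal sketch of the first approach from a shell (``$FLINK_HOME`` and the runtime
-jar version are placeholders that must match your own environment):
-
-.. code-block:: shell
-
-   # copy the Iceberg Flink runtime jar into the Flink lib directory
-   cp /path/to/iceberg-flink-runtime-1.14-0.14.0.jar $FLINK_HOME/lib/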
-
-.. warning::
-   Please mind the compatibility of different Iceberg and Flink versions, which can be confirmed on the page of `Iceberg multi engine support`_.
-
-Iceberg Operations
-------------------
-
-Taking ``CREATE CATALOG`` as an example,
-
-.. code-block:: sql
-
-   CREATE CATALOG hive_catalog WITH (
-     'type'='iceberg',
-     'catalog-type'='hive',
-     'uri'='thrift://localhost:9083',
-     'warehouse'='hdfs://nn:8020/warehouse/path'
-   );
-   USE CATALOG hive_catalog;
-
-Taking ``CREATE DATABASE`` as an example,
-
-.. code-block:: sql
-
-   CREATE DATABASE iceberg_db;
-   USE iceberg_db;
-
-Taking ``CREATE TABLE`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE `hive_catalog`.`default`.`sample` (
-     id BIGINT COMMENT 'unique id',
-     data STRING
-   );
-
-Taking ``Batch Read`` as an example,
-
-.. code-block:: sql
-
-   SET execution.runtime-mode = batch;
-   SELECT * FROM sample;
-
-Taking ``Streaming Read`` as an example,
-
-.. code-block:: sql
-
-   SET execution.runtime-mode = streaming;
-   SELECT * FROM sample /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/ ;
-
-Taking ``INSERT INTO`` as an example,
-
-.. code-block:: sql
-
-   INSERT INTO `hive_catalog`.`default`.`sample` VALUES (1, 'a');
-   INSERT INTO `hive_catalog`.`default`.`sample` SELECT id, data from other_kafka_table;
-
-Taking ``INSERT OVERWRITE`` as an example.
-Note that Flink streaming jobs do not support ``INSERT OVERWRITE``.
-
-.. code-block:: sql
-
-   INSERT OVERWRITE `hive_catalog`.`default`.`sample` VALUES (1, 'a');
-   INSERT OVERWRITE `hive_catalog`.`default`.`sample` PARTITION(data='a') SELECT 6;
-
-.. _Iceberg: https://iceberg.apache.org/
-.. _Official Documentation: https://iceberg.apache.org/docs/latest/
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.iceberg
-.. _Iceberg multi engine support: https://iceberg.apache.org/multi-engine-support/
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/flink/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/flink/index.rst.txt
deleted file mode 100644
index c9d9109..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/flink/index.rst.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Connectors For Flink SQL Query Engine
-=====================================
-
-.. toctree::
-    :maxdepth: 2
-
-    flink_table_store
-    hudi
-    iceberg
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/hive/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/hive/index.rst.txt
deleted file mode 100644
index dddfd5c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/hive/index.rst.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Connectors for Hive SQL Query Engine
-====================================
-
-.. toctree::
-    :maxdepth: 2
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/index.rst.txt
deleted file mode 100644
index f7911e6..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/index.rst.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Connectors
-==========
-
-This section describes the connectors available for different kyuubi engines to access data from various data sources.
-
-.. note:: Is your connector missing?
-   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
-
-.. toctree::
-    :maxdepth: 2
-
-    spark/index
-
-.. toctree::
-    :maxdepth: 2
-
-    flink/index
-
-.. toctree::
-    :maxdepth: 2
-
-    hive/index
-
-.. toctree::
-    :maxdepth: 2
-
-    trino/index
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake.rst.txt
deleted file mode 100644
index 164036c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake.rst.txt
+++ /dev/null
@@ -1,95 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Delta Lake`_
-=============
-
-Delta Lake is an open-source project that enables building a Lakehouse
-Architecture on top of existing storage systems such as S3, ADLS, GCS,
-and HDFS.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and
-   operation of `Delta Lake`_.
-   For the knowledge about delta lake not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards Delta Lake in a way that is
-more convenient, easier to understand, and easier to extend than
-manipulating Delta Lake directly with Spark.
-
-Delta Lake Integration
-----------------------
-
-To enable the integration of kyuubi spark sql engine and delta lake through
-Apache Spark Datasource V2 and Catalog APIs, you need to:
-
-- Referencing the delta lake :ref:`dependencies<spark-delta-lake-deps>`
-- Setting the spark extension and catalog :ref:`configurations<spark-delta-lake-conf>`
-
-.. _spark-delta-lake-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi spark sql engine with delta lake supported consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with kyuubi distributions
-2. a copy of spark distribution
-3. delta-core & delta-storage, which can be found in the `Maven Central`_
-
-In order to make the delta packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the delta packages into ``$SPARK_HOME/jars`` directly
-2. Set ``spark.jars=/path/to/delta-core,/path/to/delta-storage``
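-
-For instance, a minimal sketch of the second approach in ``spark-defaults.conf`` (the local paths
-below are placeholders and must point to the delta-core and delta-storage jars you downloaded):
-
-.. code-block:: properties
-
-   # comma-separated list of extra jars added to the Spark classpath
-   spark.jars=/path/to/delta-core_2.12.jar,/path/to/delta-storage.jar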
-
-.. warning::
-   Please mind the compatibility of different Delta Lake and Spark versions, which can be confirmed on the page of `delta release notes`_.
-
-.. _spark-delta-lake-conf:
-
-Configurations
-**************
-
-To activate functionality of delta lake, we can set the following configurations:
-
-.. code-block:: properties
-
-   spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension
-   spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog
-
-Delta Lake Operations
----------------------
-
-For end-users who only use a pure SQL interface, there is not much difference between
-using a Delta table and a regular Hive table, unless you are going to use some advanced
-features; even then, it is still SQL, just with some extra syntax.
-
-Taking ``CREATE TABLE`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE IF NOT EXISTS kyuubi_delta (
-     id INT,
-     name STRING,
-     org STRING,
-     url STRING,
-     start TIMESTAMP
-   ) USING DELTA;
-
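-As a quick follow-up sketch (not part of the original example; the sample values below are made up
-for illustration), inserting into and reading back the table works like any other table:
-
-.. code-block:: sql
-
-   -- hypothetical sample row
-   INSERT INTO kyuubi_delta VALUES
-     (1, 'kent', 'apache', 'https://kyuubi.apache.org', current_timestamp());
-
-   SELECT * FROM kyuubi_delta;
-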
-.. _Delta Lake: https://delta.io/
-.. _Official Documentation: https://docs.delta.io/latest/index.html
-.. _Maven Central: https://mvnrepository.com/artifact/io.delta/delta-core
-.. _Delta release notes: https://github.com/delta-io/delta/releases
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt
deleted file mode 100644
index dd302c5..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt
+++ /dev/null
@@ -1,345 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Delta Lake with Microsoft Azure Blob Storage
-============================================
-
-Registration And Configuration
-----------------------------------------------
-
-Register An Account And Log In
-******************************
-
-Regarding the Microsoft Azure account, please contact your organization or register
-an account as an individual. For details, please refer to the `Microsoft Azure official
-website`_.
-
-Create Storage Container
-************************
-
-After logging in with your Microsoft Azure account, please follow the steps below to create a data storage container:
-
-.. image:: ../../imgs/deltalake/azure_create_new_container.png
-
-Get Access Key
-**************
-
-.. image:: ../../imgs/deltalake/azure_create_azure_access_key.png
-
-Deploy Spark
-------------
-
-Download Spark Package
-**********************
-
-Download spark package that matches your environment from `spark official website`_.
-And then unpack it:
-
-.. code-block:: shell
-
-   tar -xzvf spark-3.2.0-bin-hadoop3.2.tgz
-
-Config Spark
-************
-
-Enter the ``$SPARK_HOME/conf`` directory and execute:
-
-.. code-block:: shell
-
-   cp spark-defaults.conf.template spark-defaults.conf
-
-Add the following configuration to ``spark-defaults.conf``, adjusting the values to match your own environment:
-
-.. code-block:: properties
-
-   spark.master                     spark://<YOUR_HOST>:7077
-   spark.sql.extensions             io.delta.sql.DeltaSparkSessionExtension
-   spark.sql.catalog.spark_catalog  org.apache.spark.sql.delta.catalog.DeltaCatalog
-
-Create a new file named ``core-site.xml`` under ``$SPARK_HOME/conf`` directory, and add following configuration:
-
-.. code-block:: xml
-
-   <?xml version="1.0" encoding="UTF-8"?>
-   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-   <configuration>
-   <property>
-       <name>fs.AbstractFileSystem.wasb.Impl</name>
-       <value>org.apache.hadoop.fs.azure.Wasb</value>
-    </property>
-    <property>
-     <name>fs.azure.account.key.YOUR_AZURE_ACCOUNT.blob.core.windows.net</name>
-     <value>YOUR_AZURE_ACCOUNT_ACCESS_KEY</value>
-    </property>
-    <property>
-       <name>fs.azure.block.blob.with.compaction.dir</name>
-       <value>/hbase/WALs,/tmp/myblobfiles</value>
-    </property>
-    <property>
-       <name>fs.azure</name>
-       <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
-    </property>
-   <property>
-       <name>fs.azure.enable.append.support</name>
-       <value>true</value>
-    </property>
-   </configuration>
-
-
-Copy Dependencies To Spark
-**************************
-
-Copy the jar packages required by Delta Lake and Microsoft Azure to the ``./spark/jars`` directory:
-
-.. code-block:: shell
-
-   wget https://repo1.maven.org/maven2/io/delta/delta-core_2.12/1.0.0/delta-core_2.12-1.0.0.jar -O ./spark/jars/delta-core_2.12-1.0.0.jar
-   wget https://repo1.maven.org/maven2/com/microsoft/azure/azure-storage/8.6.6/azure-storage-8.6.6.jar -O ./spark/jars/azure-storage-8.6.6.jar
-   wget https://repo1.maven.org/maven2/com/azure/azure-storage-blob/12.14.2/azure-storage-blob-12.14.2.jar -O ./spark/jars/azure-storage-blob-12.14.2.jar
-   wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.1.1/hadoop-azure-3.1.1.jar -O ./spark/jars/hadoop-azure-3.1.1.jar
-
-Start Spark Standalone cluster
-******************************
-
-.. code-block:: shell
-
-   ./spark/sbin/start-master.sh -h <YOUR_HOST> -p 7077 --webui-port 9090
-   ./spark/sbin/start-worker.sh spark://<YOUR_HOST>:7077
-
-Test The connectivity Of Spark And Delta Lake
-*********************************************
-
-Start spark shell:
-
-.. code-block:: shell
-
-   ./bin/spark-shell
-
-Generate a piece of random data and push them to delta lake:
-
-.. code-block:: scala
-
-   scala> val data = spark.range(1000, 2000)
-   scala> data.write.format("delta").mode("overwrite").save("wasbs://<YOUR_CONTAINER_NAME>@<YOUR_AZURE_ACCOUNT>.blob.core.windows.net/<YOUR_TABLE_NAME>")
-
-After this, you can check your data on the Azure web UI. For example, here the container name is ``1000`` and the table name is ``alexDemo20211127``:
-
-.. image:: ../../imgs/deltalake/azure_spark_connection_test_storage.png
-
-You can also check data by reading back the data from delta lake:
-
-.. code-block:: scala
-
-   scala> val df=spark.read.format("delta").load("wasbs://<YOUR_CONTAINER_NAME>@<YOUR_AZURE_ACCOUNT>.blob.core.windows.net/<YOUR_TABLE_NAME>")
-   scala> df.show()
-
-If the steps above work without problems, Spark has been successfully integrated with Delta Lake.
-
-Deploy Kyuubi
--------------
-
-Install Kyuubi
-**************
-
-1. Download the latest version of kyuubi from the `kyuubi download page`_.
-
-2. Unpack it:
-
-   tar -xzvf apache-kyuubi-|release|-incubating-bin.tgz
-
-Config Kyuubi
-*************
-
-Enter the ./kyuubi/conf directory
-
-.. code-block:: shell
-
-   cp kyuubi-defaults.conf.template kyuubi-defaults.conf
-   vim kyuubi-defaults.conf
-
-Add the following content:
-
-.. code-block:: properties
-
-   spark.master                    spark://<YOUR_HOST>:7077
-   kyuubi.authentication           NONE
-   kyuubi.frontend.bind.host       <YOUR_HOST>
-   kyuubi.frontend.bind.port       10009
-   # If you use your own zk cluster, you need to configure your zk host port.
-   # kyuubi.ha.zookeeper.quorum    <YOUR_HOST>:2181
-
-Start Kyuubi
-************
-
-.. code-block:: shell
-
-   bin/kyuubi start
-
-Check the kyuubi log to verify that the server started successfully and to find the JDBC connection URL:
-
-.. code-block:: log
-
-   2021-11-26 17:49:50.235 INFO service.ThriftFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://HOST:10009/
-   2021-11-26 17:49:50.265 INFO client.ServiceDiscovery: Created a /kyuubi/serviceUri=host:10009;version=1.3.1-incubating;sequence=0000000037 on ZooKeeper for KyuubiServer uri: host:10009
-   2021-11-26 17:49:50.267 INFO server.KyuubiServer: Service[KyuubiServer] is started.
-
-You can get the JDBC connection URL from the log above.
-
-Test The Connectivity Of Kyuubi And Delta Lake
-**********************************************
-
-Use ``$KYUUBI_HOME/bin/beeline`` tool,
-
-.. code-block:: shell
-
-   ./bin/beeline -u 'jdbc:hive2://<YOUR_HOST>:10009/'
-
-At the same time, you can also check whether the engine is running on the spark UI:
-
-.. image:: ../../imgs/deltalake/kyuubi_start_status_spark_UI.png
-
-When the engine starts, it exposes a thrift endpoint and registers itself in ZooKeeper; the Kyuubi server can then get the connection info from ZooKeeper and establish the connection to the engine.
-So, you can check the registration details under the ZooKeeper path '/kyuubi_USER/anonymous'.
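-
-For instance, a hypothetical way to peek at that registration with the ZooKeeper CLI (assuming
-``zkCli.sh`` from a ZooKeeper distribution and the ZooKeeper quorum used by Kyuubi, here
-``<YOUR_HOST>:2181``):
-
-.. code-block:: shell
-
-   # list the engine znodes registered under the user namespace
-   ./zkCli.sh -server <YOUR_HOST>:2181 ls /kyuubi_USER/anonymous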
-
-Dealing with Delta Lake Data Using Kyuubi
-------------------------------------------------
-
-Operate Delta Lake data through SQL:
-
-Create Table
-************
-
-.. code-block:: sql
-
-   -- Create or replace table with path
-   CREATE OR REPLACE TABLE delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129` (
-     date DATE,
-     eventId STRING,
-     eventType STRING,
-     data STRING)
-   USING DELTA
-   PARTITIONED BY (date);
-
-Insert Data
-***********
-
-Append Mode
-^^^^^^^^^^^
-
-.. code-block:: sql
-
-   INSERT INTO delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129` (
-       date,
-       eventId,
-       eventType,
-       data)
-   VALUES
-       (now(),'001','test','Hello World!'),
-       (now(),'002','test','Hello World!'),
-       (now(),'003','test','Hello World!');
-
-Result:
-
-.. code-block:: text
-
-   +-------------+----------+------------+---------------+
-   |    date     | eventId  | eventType  |     data      |
-   +-------------+----------+------------+---------------+
-   | 2021-11-29  | 001      | test       | Hello World!  |
-   | 2021-11-29  | 003      | test       | Hello World!  |
-   | 2021-11-29  | 002      | test       | Hello World!  |
-   +-------------+----------+------------+---------------+
-
-Overwrite Mode
-^^^^^^^^^^^^^^
-
-.. code-block:: sql
-
-   INSERT OVERWRITE TABLE delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`(
-       date,
-       eventId,
-       eventType,
-       data)
-   VALUES
-   (now(),'001','test','hello kyuubi'),
-   (now(),'002','test','hello kyuubi');
-
-Result:
-
-.. code-block:: text
-
-   +-------------+----------+------------+---------------+
-   |    date     | eventId  | eventType  |     data      |
-   +-------------+----------+------------+---------------+
-   | 2021-11-29  | 002      | test       | hello kyuubi  |
-   | 2021-11-29  | 001      | test       | hello kyuubi  |
-   +-------------+----------+------------+---------------+
-
-Delete Table Data
-*****************
-
-.. code-block:: sql
-
-   DELETE FROM
-      delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`
-   WHERE eventId = 002;
-
-Result:
-
-.. code-block:: text
-
-   +-------------+----------+------------+---------------+
-   |    date     | eventId  | eventType  |     data      |
-   +-------------+----------+------------+---------------+
-   | 2021-11-29  | 001      | test       | hello kyuubi  |
-   +-------------+----------+------------+---------------+
-
-Update table data
-*****************
-
-.. code-block:: sql
-
-   UPDATE
-       delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`
-   SET data = 'This is a test for update data.'
-   WHERE eventId = 001;
-
-Result:
-
-.. code-block:: text
-
-   +-------------+----------+------------+----------------------------------+
-   |    date     | eventId  | eventType  |               data               |
-   +-------------+----------+------------+----------------------------------+
-   | 2021-11-29  | 001      | test       | This is a test for update data.  |
-   +-------------+----------+------------+----------------------------------+
-
-Select table data
-*****************
-
-.. code-block:: sql
-
-   SELECT *
-   FROM
-       delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`;
-
-Result:
-
-.. code-block:: text
-
-   +-------------+----------+------------+----------------------------------+
-   |    date     | eventId  | eventType  |               data               |
-   +-------------+----------+------------+----------------------------------+
-   | 2021-11-29  | 001      | test       | This is a test for update data.  |
-   +-------------+----------+------------+----------------------------------+
-
-.. _Microsoft Azure official website: https://azure.microsoft.com/en-gb/
-.. _spark official website: https://spark.apache.org/downloads.html
-.. _kyuubi download page: https://kyuubi.apache.org/releases.html
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/flink_table_store.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/flink_table_store.rst.txt
deleted file mode 100644
index ee4c2b3..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/flink_table_store.rst.txt
+++ /dev/null
@@ -1,90 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Flink Table Store`_
-======================
-
-Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
-supporting high-speed data ingestion and timely data query.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
-   For the knowledge about Flink Table Store not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards Flink Table Store in a way that is
-more convenient, easier to understand, and easier to extend than
-manipulating Flink Table Store directly with Spark.
-
-Flink Table Store Integration
-------------------------------
-
-To enable the integration of kyuubi spark sql engine and Flink Table Store through
-Apache Spark Datasource V2 and Catalog APIs, you need to:
-
-- Referencing the Flink Table Store :ref:`dependencies<spark-flink-table-store-deps>`
-- Setting the spark extension and catalog :ref:`configurations<spark-flink-table-store-conf>`
-
-.. _spark-flink-table-store-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi spark sql engine with Flink Table Store supported consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. flink-table-store-spark-<version>.jar (example: flink-table-store-spark-0.2.jar), which can be found in the `Maven Central`_
-
-In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the Flink Table Store packages into ``$SPARK_HOME/jars`` directly
-2. Set ``spark.jars=/path/to/flink-table-store-spark``
-
-.. warning::
-   Please mind the compatibility of different Flink Table Store and Spark versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
-
-.. _spark-flink-table-store-conf:
-
-Configurations
-**************
-
-To activate functionality of Flink Table Store, we can set the following configurations:
-
-.. code-block:: properties
-
-   spark.sql.catalog.tablestore=org.apache.flink.table.store.spark.SparkCatalog
-   spark.sql.catalog.tablestore.warehouse=file:/tmp/warehouse
-
-Flink Table Store Operations
------------------------------
-
-Flink Table Store supports reading table store tables through Spark.
-A common scenario is to write data with Flink and read data with Spark.
-You can follow this document `Flink Table Store Quick Start`_  to write data to a table store table
-and then use kyuubi spark sql engine to query the table with the following SQL ``SELECT`` statement.
-
-
-.. code-block:: sql
-
-   SELECT * FROM tablestore.default.word_count;
-
-
-
-.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Flink Table Store Quick Start: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/try-table-store/quick-start/
-.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.flink
-.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/hudi.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/hudi.rst.txt
deleted file mode 100644
index 045e751..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/hudi.rst.txt
+++ /dev/null
@@ -1,112 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Hudi`_
-========
-
-Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform.
-Apache Hudi brings core warehouse and database functionality directly to a data lake.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Hudi`_.
-   For the knowledge about Hudi not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using Kyuubi, we can run SQL queries towards Hudi in a way that is more convenient, easier to
-understand, and easier to extend than manipulating Hudi directly with Spark.
-
-Hudi Integration
-----------------
-
-To enable the integration of kyuubi spark sql engine and Hudi through
-Catalog APIs, you need to:
-
-- Referencing the Hudi :ref:`dependencies<spark-hudi-deps>`
-- Setting the Spark extension and catalog :ref:`configurations<spark-hudi-conf>`
-
-.. _spark-hudi-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi spark sql engine with Hudi supported consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. hudi-spark<spark.version>-bundle_<scala.version>-<hudi.version>.jar (example: hudi-spark3.2-bundle_2.12-0.11.1.jar), which can be found in the `Maven Central`_
-
-In order to make the Hudi packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the Hudi packages into ``$SPARK_HOME/jars`` directly
-2. Set ``spark.jars=/path/to/hudi-spark-bundle``
-
-.. _spark-hudi-conf:
-
-Configurations
-**************
-
-To activate functionality of Hudi, we can set the following configurations:
-
-.. code-block:: properties
-
-   # Spark 3.2
-   spark.serializer=org.apache.spark.serializer.KryoSerializer
-   spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension
-   spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog
-
-   # Spark 3.1
-   spark.serializer=org.apache.spark.serializer.KryoSerializer
-   spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension
-
-Hudi Operations
----------------
-
-Taking ``Create Table`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE hudi_cow_nonpcf_tbl (
-     uuid INT,
-     name STRING,
-     price DOUBLE
-   ) USING HUDI;
-
-Taking ``Query Data`` as an example,
-
-.. code-block:: sql
-
-   SELECT * FROM hudi_cow_nonpcf_tbl WHERE uuid < 20;
-
-Taking ``Insert Data`` as an example,
-
-.. code-block:: sql
-
-   INSERT INTO hudi_cow_nonpcf_tbl SELECT 1, 'a1', 20;
-
-
-Taking ``Update Data`` as an example,
-
-.. code-block:: sql
-
-   UPDATE hudi_cow_nonpcf_tbl SET name = 'foo', price = price * 2 WHERE uuid = 1;
-
-Taking ``Delete Data`` as an example,
-
-.. code-block:: sql
-
-   DELETE FROM hudi_cow_nonpcf_tbl WHERE uuid = 1;
-
-.. _Hudi: https://hudi.apache.org/
-.. _Official Documentation: https://hudi.apache.org/docs/overview
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.hudi
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/iceberg.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/iceberg.rst.txt
deleted file mode 100644
index 2ce58aa..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/iceberg.rst.txt
+++ /dev/null
@@ -1,124 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Iceberg`_
-==========
-
-Apache Iceberg is an open table format for huge analytic datasets.
-Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
-using a high-performance table format that works just like a SQL table.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
-   For the knowledge about Iceberg not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards Iceberg in a way that is
-more convenient, easier to understand, and easier to extend than
-manipulating Iceberg directly with Spark.
-
-Iceberg Integration
--------------------
-
-To enable the integration of kyuubi spark sql engine and Iceberg through
-Apache Spark Datasource V2 and Catalog APIs, you need to:
-
-- Referencing the Iceberg :ref:`dependencies<spark-iceberg-deps>`
-- Setting the spark extension and catalog :ref:`configurations<spark-iceberg-conf>`
-
-.. _spark-iceberg-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi spark sql engine with Iceberg supported consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. iceberg-spark-runtime-<spark.version>_<scala.version>-<iceberg.version>.jar (example: iceberg-spark-runtime-3.2_2.12-0.14.0.jar), which can be found in the `Maven Central`_
-
-In order to make the Iceberg packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the Iceberg packages into ``$SPARK_HOME/jars`` directly
-2. Set ``spark.jars=/path/to/iceberg-spark-runtime``
-
-.. warning::
-   Please mind the compatibility of different Iceberg and Spark versions, which can be confirmed on the page of `Iceberg multi engine support`_.
-
-.. _spark-iceberg-conf:
-
-Configurations
-**************
-
-To activate functionality of Iceberg, we can set the following configurations:
-
-.. code-block:: properties
-
-   spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
-   spark.sql.catalog.spark_catalog.type=hive
-   spark.sql.catalog.spark_catalog.uri=thrift://metastore-host:port
-   spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
-
-Iceberg Operations
-------------------
-
-Taking ``CREATE TABLE`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE foo (
-     id bigint COMMENT 'unique id',
-     data string)
-   USING iceberg;
-
-Taking ``SELECT`` as an example,
-
-.. code-block:: sql
-
-   SELECT * FROM foo;
-
-Taking ``INSERT`` as an example,
-
-.. code-block:: sql
-
-   INSERT INTO foo VALUES (1, 'a'), (2, 'b'), (3, 'c');
-
-Taking ``UPDATE`` as an example; Spark 3.1 added support for UPDATE queries that update matching rows in tables.
-
-.. code-block:: sql
-
-   UPDATE foo SET data = 'd', id = 4 WHERE id >= 3 and id < 4;
-
-Taking ``DELETE FROM`` as an example; Spark 3 added support for DELETE FROM queries to remove data from tables.
-
-.. code-block:: sql
-
-   DELETE FROM foo WHERE id >= 1 and id < 2;
-
-Taking ``MERGE INTO`` as an example,
-
-.. code-block:: sql
-
-   MERGE INTO target_table t
-   USING source_table s
-   ON t.id = s.id
-   WHEN MATCHED AND s.opType = 'delete' THEN DELETE
-   WHEN MATCHED AND s.opType = 'update' THEN UPDATE SET id = s.id, data = s.data
-   WHEN NOT MATCHED AND s.opType = 'insert' THEN INSERT (id, data) VALUES (s.id, s.data);
-
-.. _Iceberg: https://iceberg.apache.org/
-.. _Official Documentation: https://iceberg.apache.org/docs/latest/
-.. _Maven Central: https://mvnrepository.com/artifact/org.apache.iceberg
-.. _Iceberg multi engine support: https://iceberg.apache.org/multi-engine-support/
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/index.rst.txt
deleted file mode 100644
index 7109eda..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/index.rst.txt
+++ /dev/null
@@ -1,42 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Connectors for Spark SQL Query Engine
-=====================================
-
-The Kyuubi Spark SQL Query Engine uses Spark DataSource APIs (V1/V2) to access
-data from different data sources.
-
-By default, it provides access to Hive warehouses with various file formats
-supported, such as parquet, orc, json, etc.
-
-Also, it can easily integrate with other third-party libraries, such as Hudi,
-Iceberg, Delta Lake, Kudu, Flink Table Store, HBase, Cassandra, etc.
-
-We also provide sample data sources like TPC-DS and TPC-H for testing and benchmarking
-purposes.
-
-.. toctree::
-    :maxdepth: 2
-
-    delta_lake
-    delta_lake_with_azure_blob
-    hudi
-    iceberg
-    kudu
-    flink_table_store
-    tidb
-    tpcds
-    tpch
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/kudu.md.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/kudu.md.txt
deleted file mode 100644
index 0d3c850..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/kudu.md.txt
+++ /dev/null
@@ -1,185 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Kudu
-
-## What is Apache Kudu
-
-> A new addition to the open source Apache Hadoop ecosystem, Apache Kudu completes Hadoop's storage layer to enable fast analytics on fast data.
-
-When you are reading this documentation, we do not assume that you are familiar with [Apache Kudu](https://kudu.apache.org/). But at a minimum, you should have a running Kudu cluster that you can connect to. And it is even better if you understand what Apache Kudu is capable of.
-
-For any Apache Kudu background knowledge missing from this page, you can refer to its official website.
-
-## Why Kyuubi on Kudu
-Basically, Kyuubi can take the place of HiveServer2 as a multi-tenant ad-hoc SQL on Hadoop solution, with the advantages of speed and power coming from Spark SQL. You can run SQL queries towards both data source and Hive tables, accessing only the data and computing resources you are authorized to use.
-
-> Spark SQL supports operating on a variety of data sources through the DataFrame interface. A DataFrame can be operated on using relational transformations and can also be used to create a temporary view. Registering a DataFrame as a temporary view allows you to run SQL queries over its data. This section describes the general methods for loading and saving data using the Spark Data Sources and then goes into specific options that are available for the built-in data sources.
-
-In Kyuubi, we can register Kudu tables and other data source tables as Spark temporary views to enable federated union queries across Hive, Kudu, and other data sources.
-
-## Kudu Integration with Apache Spark
-Before integrating Kyuubi with Kudu, we strongly suggest that you integrate and test Spark with Kudu first. You may find the guide from Kudu's online documentation -- [Kudu Integration with Spark](https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark)
-
-## Kudu Integration with Kyuubi
-
-#### Install Kudu Spark Dependency
-Confirm your Kudu cluster version and download the corresponding kudu spark dependency library, such as [org.apache.kudu:kudu-spark3_2.12-1.14.0](https://repo1.maven.org/maven2/org/apache/kudu/kudu-spark3_2.12/1.14.0/kudu-spark3_2.12-1.14.0.jar) to `$SPARK_HOME`/jars.
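-
-For instance, a minimal sketch of fetching that jar with `wget` (the artifact version is an
-assumption and must match your Kudu cluster and Spark/Scala versions):
-
-```shell
-# download the Kudu Spark connector into Spark's jars directory
-wget https://repo1.maven.org/maven2/org/apache/kudu/kudu-spark3_2.12/1.14.0/kudu-spark3_2.12-1.14.0.jar \
-  -P $SPARK_HOME/jars/
-```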
-
-#### Start Kyuubi
-
-Now, you can start the Kyuubi server with this Kudu-enabled Spark distribution.
-
-#### Start Beeline Or Other Client You Prefer
-
-```shell
-bin/beeline -u 'jdbc:hive2://<host>:<port>/;principal=<if kerberized>;#spark.yarn.queue=kyuubi_test'
-```
-
-#### Register Kudu table as Spark Temporary view
-
-```sql
-CREATE TEMPORARY VIEW kudutest
-USING kudu
-options ( 
-  kudu.master "ip1:port1,ip2:port2,...",
-  kudu.table "kudu::test.testtbl")
-```
-
-```sql
-0: jdbc:hive2://spark5.jd.163.org:10009/> show tables;
-19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Running query 'show tables' with 1104328b-515c-4f8b-8a68-1c0b202bc9ed
-19/07/09 15:28:03 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
-19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs before optimization
-19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs without optimization
-19/07/09 15:28:03 INFO DAGScheduler: Asked to cancel job group 1104328b-515c-4f8b-8a68-1c0b202bc9ed
-+-----------+-----------------------------+--------------+--+
-| database  |          tableName          | isTemporary  |
-+-----------+-----------------------------+--------------+--+
-| kyuubi    | hive_tbl                    | false        |
-|           | kudutest                    | true         |
-+-----------+-----------------------------+--------------+--+
-2 rows selected (0.29 seconds)
-```
-
-#### Query Kudu Table
-
-```sql
-0: jdbc:hive2://spark5.jd.163.org:10009/> select * from kudutest;
-19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Running query 'select * from kudutest' with ac3e8553-0d79-4c57-add1-7d3ffe34ba16
-19/07/09 15:25:17 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
-19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 3 jobs before optimization
-19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 3 jobs without optimization
-19/07/09 15:25:17 INFO DAGScheduler: Asked to cancel job group ac3e8553-0d79-4c57-add1-7d3ffe34ba16
-+---------+---------------+----------------+--+
-| userid  | sharesetting  | notifysetting  |
-+---------+---------------+----------------+--+
-| 1       | 1             | 1              |
-| 5       | 5             | 5              |
-| 2       | 2             | 2              |
-| 3       | 3             | 3              |
-| 4       | 4             | 4              |
-+---------+---------------+----------------+--+
-5 rows selected (1.083 seconds)
-```
-
-
-#### Join Kudu table with Hive table
-
-```sql
-0: jdbc:hive2://spark5.jd.163.org:10009/> select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1;
-19/07/09 15:31:01 INFO ExecuteStatementInClientMode: Running query 'select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1' with 6982fa5c-29fa-49be-a5bf-54c935bbad18
-19/07/09 15:31:01 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
-<omitted lines.... >
-19/07/09 15:31:01 INFO DAGScheduler: Asked to cancel job group 6982fa5c-29fa-49be-a5bf-54c935bbad18
-+---------+---------------+----------------+---------+---------------+----------------+--+
-| userid  | sharesetting  | notifysetting  | userid  | sharesetting  | notifysetting  |
-+---------+---------------+----------------+---------+---------------+----------------+--+
-| 2       | 2             | 2              | 1       | 1             | 1              |
-| 3       | 3             | 3              | 2       | 2             | 2              |
-| 4       | 4             | 4              | 3       | 3             | 3              |
-+---------+---------------+----------------+---------+---------------+----------------+--+
-3 rows selected (1.63 seconds)
-```
-
-#### Insert to Kudu table
-
-You should notice that only `INSERT INTO` is supported by Kudu; `OVERWRITE` of data is not supported.
-
-```sql
-0: jdbc:hive2://spark5.jd.163.org:10009/> insert overwrite table kudutest select *  from hive_tbl;
-19/07/09 15:35:29 INFO ExecuteStatementInClientMode: Running query 'insert overwrite table kudutest select *  from hive_tbl' with 1afdb791-1aa7-4ceb-8ba8-ff53c17615d1
-19/07/09 15:35:29 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
-19/07/09 15:35:30 ERROR ExecuteStatementInClientMode:
-Error executing query as bdms_hzyaoqin,
-insert overwrite table kudutest select *  from hive_tbl
-Current operation state RUNNING,
-java.lang.UnsupportedOperationException: overwrite is not yet supported
-	at org.apache.kudu.spark.kudu.KuduRelation.insert(DefaultSource.scala:424)
-	at org.apache.spark.sql.execution.datasources.InsertIntoDataSourceCommand.run(InsertIntoDataSourceCommand.scala:42)
-	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
-	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
-	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
-	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
-	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
-	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
-	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
-	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
-	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
-	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
-	at org.apache.spark.sql.SparkSQLUtils$.toDataFrame(SparkSQLUtils.scala:39)
-	at org.apache.kyuubi.operation.statement.ExecuteStatementInClientMode.execute(ExecuteStatementInClientMode.scala:152)
-	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1$$anon$2.run(ExecuteStatementOperation.scala:74)
-	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1$$anon$2.run(ExecuteStatementOperation.scala:70)
-	at java.security.AccessController.doPrivileged(Native Method)
-	at javax.security.auth.Subject.doAs(Subject.java:422)
-	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
-	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1.run(ExecuteStatementOperation.scala:70)
-	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
-	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
-	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
-	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
-	at java.lang.Thread.run(Thread.java:745)
-
-
-19/07/09 15:35:30 INFO DAGScheduler: Asked to cancel job group 1afdb791-1aa7-4ceb-8ba8-ff53c17615d1
-
-```
-
-```sql
-0: jdbc:hive2://spark5.jd.163.org:10009/> insert into table kudutest select * from hive_tbl;
-19/07/09 15:36:26 INFO ExecuteStatementInClientMode: Running query 'insert into table kudutest select *  from hive_tbl' with f7460400-0564-4f98-93b6-ad76e579e7af
-19/07/09 15:36:26 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
-<omitted lines ...>
-19/07/09 15:36:27 INFO DAGScheduler: ResultStage 36 (foreachPartition at KuduContext.scala:332) finished in 0.322 s
-19/07/09 15:36:27 INFO DAGScheduler: Job 36 finished: foreachPartition at KuduContext.scala:332, took 0.324586 s
-19/07/09 15:36:27 INFO KuduContext: completed upsert ops: duration histogram: 33.333333333333336%: 2ms, 66.66666666666667%: 64ms, 100.0%: 102ms, 100.0%: 102ms
-19/07/09 15:36:27 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs before optimization
-19/07/09 15:36:27 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs without optimization
-19/07/09 15:36:27 INFO DAGScheduler: Asked to cancel job group f7460400-0564-4f98-93b6-ad76e579e7af
-+---------+--+
-| Result  |
-+---------+--+
-+---------+--+
-No rows selected (0.611 seconds)
-```
-
-## References
-[https://kudu.apache.org/](https://kudu.apache.org/)
-[https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark](https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark)
-[https://github.com/apache/incubator-kyuubi](https://github.com/apache/incubator-kyuubi)
-[https://spark.apache.org/docs/latest/sql-data-sources.html](https://spark.apache.org/docs/latest/sql-data-sources.html)
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/tidb.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/tidb.rst.txt
deleted file mode 100644
index 366f3b2..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/tidb.rst.txt
+++ /dev/null
@@ -1,103 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`TiDB`_
-==========
-
-TiDB is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing
-(HTAP) workloads.
-
-TiSpark is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP
-queries. It enjoys the merits of both the Spark platform and the distributed clusters
-of TiKV while being seamlessly integrated with TiDB to provide one-stop HTAP solutions for online
-transactions and analyses.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of TiDB and TiSpark.
-   For the knowledge not mentioned in this article, you can obtain it from TiDB `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards TiDB/TiKV in a way that is
-more convenient, easier to understand, and easier to extend than
-manipulating TiDB/TiKV directly with Spark.
-
-TiDB Integration
--------------------
-
-To enable the integration of kyuubi spark sql engine and TiDB through
-Apache Spark Datasource V2 and Catalog APIs, you need to:
-
-- Referencing the TiSpark :ref:`dependencies<spark-tidb-deps>`
-- Setting the spark extension and catalog :ref:`configurations<spark-tidb-conf>`
-
-.. _spark-tidb-deps:
-
-Dependencies
-************
-The classpath of kyuubi spark sql engine with TiDB supported consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. tispark-assembly-<spark.version>_<scala.version>-<tispark.version>.jar (example: tispark-assembly-3.2_2.12-3.0.1.jar), which can be found in the `Maven Central`_
-
-In order to make the TiSpark packages visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the TiSpark packages into ``$SPARK_HOME/jars`` directly
-2. Set ``spark.jars=/path/to/tispark-assembly``
-
-.. warning::
-   Please mind the compatibility of different TiDB, TiSpark and Spark versions, which can be confirmed on the page of `TiSpark Environment setup`_.
-
-.. _spark-tidb-conf:
-
-Configurations
-**************
-
-To activate the functionality of TiSpark, we can set the following configurations:
-
-.. code-block:: properties
-
-   spark.tispark.pd.addresses $pd_host:$pd_port
-   spark.sql.extensions org.apache.spark.sql.TiExtensions
-   spark.sql.catalog.tidb_catalog  org.apache.spark.sql.catalyst.catalog.TiCatalog
-   spark.sql.catalog.tidb_catalog.pd.addresses $pd_host:$pd_port
-
-The `spark.tispark.pd.addresses` and `spark.sql.catalog.tidb_catalog.pd.addresses` configurations
-allow you to specify multiple PD servers. Include the port number for each of them.
-
-For example, when you have multiple PD servers on `10.16.20.1,10.16.20.2,10.16.20.3` with the port `2379`,
-put it as `10.16.20.1:2379,10.16.20.2:2379,10.16.20.3:2379`.
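-
-The following is a minimal sketch that appends the illustrative multi-PD setup above to
-``$SPARK_HOME/conf/spark-defaults.conf``; replace the hosts and ports with your own PD endpoints.
-
-.. code-block:: bash
-
-   # append the multi-PD configuration (illustrative addresses) to spark-defaults.conf
-   echo "spark.tispark.pd.addresses 10.16.20.1:2379,10.16.20.2:2379,10.16.20.3:2379" >> $SPARK_HOME/conf/spark-defaults.conf
-   echo "spark.sql.catalog.tidb_catalog.pd.addresses 10.16.20.1:2379,10.16.20.2:2379,10.16.20.3:2379" >> $SPARK_HOME/conf/spark-defaults.conf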
-
-TiDB Operations
-------------------
-
-Taking ``SELECT`` as an example,
-
-.. code-block:: sql
-
-   SELECT * FROM foo;
-
-Taking ``DELETE FROM`` as an example; Spark 3 added support for ``DELETE FROM`` queries to remove data from tables.
-
-.. code-block:: sql
-
-   DELETE FROM foo WHERE id >= 1 and id < 2;
-
-.. note::
-   As for now (TiSpark 3.0.1), TiSpark does not support ``CREATE TABLE``, ``INSERT INTO/OVERWRITE`` operations
-   through Apache Spark Datasource V2 and Catalog APIs.
-
-.. _Official Documentation: https://docs.pingcap.com/tidb/stable/overview
-.. _Maven Central: https://repo1.maven.org/maven2/com/pingcap/tispark/
-.. _TiSpark Environment setup: https://docs.pingcap.com/tidb/stable/tispark-overview#environment-setup
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/tpcds.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/tpcds.rst.txt
deleted file mode 100644
index e52e56c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/tpcds.rst.txt
+++ /dev/null
@@ -1,108 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-TPC-DS
-======
-
-TPC-DS is a decision support benchmark. It consists of a suite of business-oriented ad-hoc queries and concurrent
-data modifications. The queries and the data populating the database have been chosen to have broad industry-wide
-relevance.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `TPC-DS`_.
-   For the knowledge about TPC-DS not mentioned in this article, you can obtain it from its `Official Documentation`_.
-
-This connector can be used to test the capabilities and query syntax of Spark without configuring access to an external
-data source. When you query a TPC-DS table, the connector generates the data on the fly using a deterministic algorithm.
-
-Go to `Try Kyuubi`_ to explore TPC-DS data instantly!
-
-TPC-DS Integration
-------------------
-
-To enable the integration of the Kyuubi Spark SQL engine and TPC-DS through
-the Apache Spark DataSource V2 and Catalog APIs, you need to:
-
-- Reference the TPC-DS connector :ref:`dependencies<spark-tpcds-deps>`
-- Set the Spark catalog :ref:`configurations<spark-tpcds-conf>`
-
-.. _spark-tpcds-deps:
-
-Dependencies
-************
-
-The **classpath** of the Kyuubi Spark SQL engine with TPC-DS support consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. kyuubi-spark-connector-tpcds-\ |release|\ _2.12.jar, which can be found in the `Maven Central`_
-
-In order to make the TPC-DS connector package visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the TPC-DS connector package into ``$SPARK_HOME/jars`` directly
-2. Set spark.jars=kyuubi-spark-connector-tpcds-\ |release|\ _2.12.jar
-
-.. _spark-tpcds-conf:
-
-Configurations
-**************
-
-To add TPC-DS tables as a catalog, we can set the following configurations in ``$SPARK_HOME/conf/spark-defaults.conf``:
-
-.. code-block:: properties
-
-   # (required) Register a catalog named `tpcds` for the spark engine.
-   spark.sql.catalog.tpcds=org.apache.kyuubi.spark.connector.tpcds.TPCDSCatalog
-
-   # (optional) Excluded database list from the catalog, all available databases are:
-   #            sf0, tiny, sf1, sf10, sf30, sf100, sf300, sf1000, sf3000, sf10000, sf30000, sf100000.
-   spark.sql.catalog.tpcds.excludeDatabases=sf10000,sf30000
-
-   # (optional) When true, use CHAR/VARCHAR, otherwise use STRING. It affects output of the table schema,
-   #            e.g. `SHOW CREATE TABLE <table>`, `DESC <table>`.
-   spark.sql.catalog.tpcds.useAnsiStringType=false
-
-   # (optional) TPCDS changed table schemas in v2.6.0, turn off this option to use old table schemas.
-   #            See detail at: https://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v3.2.0.pdf
-   spark.sql.catalog.tpcds.useTableSchema_2_6=true
-
-   # (optional) Maximum bytes per task, consider reducing it if you want higher parallelism.
-   spark.sql.catalog.tpcds.read.maxPartitionBytes=128m
-
-TPC-DS Operations
------------------
-
-Listing databases under `tpcds` catalog.
-
-.. code-block:: sql
-
-   SHOW DATABASES IN tpcds;
-
-Listing tables under `tpcds.sf1` database.
-
-.. code-block:: sql
-
-   SHOW TABLES IN tpcds.sf1;
-
-Switch current database to `tpcds.sf1` and run a query against it.
-
-.. code-block:: sql
-
-   USE tpcds.sf1;
-   SELECT * FROM store_sales;
-
-.. _Official Documentation: https://www.tpc.org/tpcds/
-.. _Try Kyuubi: https://try.kyuubi.cloud/
-.. _Maven Central: https://repo1.maven.org/maven2/org/apache/kyuubi/kyuubi-spark-connector-tpcds_2.12/
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/spark/tpch.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/spark/tpch.rst.txt
deleted file mode 100644
index 72ad8e9..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/spark/tpch.rst.txt
+++ /dev/null
@@ -1,104 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-TPC-H
-=====
-
-TPC-H is a decision support benchmark. It consists of a suite of business-oriented ad-hoc queries and concurrent
-data modifications. The queries and the data populating the database have been chosen to have broad industry-wide
-relevance.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `TPC-H`_.
-   For the knowledge about TPC-H not mentioned in this article, you can obtain it from its `Official Documentation`_.
-
-This connector can be used to test the capabilities and query syntax of Spark without configuring access to an external
-data source. When you query a TPC-H table, the connector generates the data on the fly using a deterministic algorithm.
-
-Go to `Try Kyuubi`_ to explore TPC-H data instantly!
-
-TPC-H Integration
-------------------
-
-To enable the integration of the Kyuubi Spark SQL engine and TPC-H through
-the Apache Spark DataSource V2 and Catalog APIs, you need to:
-
-- Reference the TPC-H connector :ref:`dependencies<spark-tpch-deps>`
-- Set the Spark catalog :ref:`configurations<spark-tpch-conf>`
-
-.. _spark-tpch-deps:
-
-Dependencies
-************
-
-The **classpath** of the Kyuubi Spark SQL engine with TPC-H support consists of
-
-1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of spark distribution
-3. kyuubi-spark-connector-tpch-\ |release|\ _2.12.jar, which can be found in the `Maven Central`_
-
-In order to make the TPC-H connector package visible for the runtime classpath of engines, we can use one of these methods:
-
-1. Put the TPC-H connector package into ``$SPARK_HOME/jars`` directly
-2. Set spark.jars=kyuubi-spark-connector-tpch-\ |release|\ _2.12.jar
-
-.. _spark-tpch-conf:
-
-Configurations
-**************
-
-To add TPC-H tables as a catalog, we can set the following configurations in ``$SPARK_HOME/conf/spark-defaults.conf``:
-
-.. code-block:: properties
-
-   # (required) Register a catalog named `tpch` for the spark engine.
-   spark.sql.catalog.tpch=org.apache.kyuubi.spark.connector.tpch.TPCHCatalog
-
-   # (optional) Excluded database list from the catalog, all available databases are:
-   #            sf0, tiny, sf1, sf10, sf30, sf100, sf300, sf1000, sf3000, sf10000, sf30000, sf100000.
-   spark.sql.catalog.tpch.excludeDatabases=sf10000,sf30000
-
-   # (optional) When true, use CHAR/VARCHAR, otherwise use STRING. It affects output of the table schema,
-   #            e.g. `SHOW CREATE TABLE <table>`, `DESC <table>`.
-   spark.sql.catalog.tpch.useAnsiStringType=false
-
-   # (optional) Maximum bytes per task, consider reducing it if you want higher parallelism.
-   spark.sql.catalog.tpch.read.maxPartitionBytes=128m
-
-TPC-H Operations
-----------------
-
-Listing databases under `tpch` catalog.
-
-.. code-block:: sql
-
-   SHOW DATABASES IN tpch;
-
-Listing tables under `tpch.sf1` database.
-
-.. code-block:: sql
-
-   SHOW TABLES IN tpch.sf1;
-
-Switch current database to `tpch.sf1` and run a query against it.
-
-.. code-block:: sql
-
-   USE tpch.sf1;
-   SELECT * FROM orders;
-
-.. _Official Documentation: https://www.tpc.org/tpch/
-.. _Try Kyuubi: https://try.kyuubi.cloud/
-.. _Maven Central: https://repo1.maven.org/maven2/org/apache/kyuubi/kyuubi-spark-connector-tpch_2.12/
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/trino/flink_table_store.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/trino/flink_table_store.rst.txt
deleted file mode 100644
index 8dd0c40..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/trino/flink_table_store.rst.txt
+++ /dev/null
@@ -1,94 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Flink Table Store`_
-====================
-
-Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
-supporting high-speed data ingestion and timely data query.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
-   For the knowledge about Flink Table Store not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using Kyuubi, we can run SQL queries against Flink Table Store in a way that is more
-convenient, easier to understand, and easier to extend than manipulating
-Flink Table Store directly with Trino.
-
-Flink Table Store Integration
------------------------------
-
-To enable the integration of the Kyuubi Trino SQL engine and Flink Table Store, you need to:
-
-- Reference the Flink Table Store :ref:`dependencies<trino-flink-table-store-deps>`
-- Set the Trino extension and catalog :ref:`configurations<trino-flink-table-store-conf>`
-
-.. _trino-flink-table-store-deps:
-
-Dependencies
-************
-
-The **classpath** of the Kyuubi Trino SQL engine with Flink Table Store support consists of
-
-1. kyuubi-trino-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of trino distribution
-3. flink-table-store-trino-<version>.jar (example: flink-table-store-trino-0.2.jar), whose source code can be found in the `Source Code`_ repository
-4. flink-shaded-hadoop-2-uber-2.8.3-10.0.jar, which can be downloaded from `Pre-bundled Hadoop 2.8.3`_
-
-In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use these methods:
-
-1. Build the flink-table-store-trino-<version>.jar by reference to `Flink Table Store Trino README`_
-2. Put the flink-table-store-trino-<version>.jar and flink-shaded-hadoop-2-uber-2.8.3-10.0.jar packages into ``$TRINO_SERVER_HOME/plugin/tablestore`` directly
-
-.. warning::
-   Please mind the compatibility of different Flink Table Store and Trino versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
-
-.. _trino-flink-table-store-conf:
-
-Configurations
-**************
-
-To activate the functionality of Flink Table Store, we can set the following configurations:
-
-Catalogs are registered by creating a catalog properties file in the ``$TRINO_SERVER_HOME/etc/catalog`` directory.
-For example, create ``$TRINO_SERVER_HOME/etc/catalog/tablestore.properties`` with the following contents to mount the tablestore connector as the ``tablestore`` catalog:
-
-.. code-block:: properties
-
-   connector.name=tablestore
-   warehouse=file:///tmp/warehouse
-
-Flink Table Store Operations
-----------------------------
-
-Flink Table Store supports reading table store tables through Trino.
-A common scenario is to write data with Flink and read data with Trino.
-You can follow the `Flink Table Store Quick Start`_ document to write data to a table store table
-and then use the Kyuubi Trino SQL engine to query the table with the following SQL ``SELECT`` statement.
-
-
-.. code-block:: sql
-
-   SELECT * FROM tablestore.default.t1
-
-
-.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Flink Table Store Quick Start: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/try-table-store/quick-start/
-.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Source Code: https://github.com/JingsongLi/flink-table-store-trino
-.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
-.. _Pre-bundled Hadoop 2.8.3: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
-.. _Flink Table Store Trino README: https://github.com/JingsongLi/flink-table-store-trino#readme
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/trino/iceberg.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/trino/iceberg.rst.txt
deleted file mode 100644
index 6fc09bc..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/trino/iceberg.rst.txt
+++ /dev/null
@@ -1,92 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Iceberg`_
-==========
-
-Apache Iceberg is an open table format for huge analytic datasets.
-Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
-using a high-performance table format that works just like a SQL table.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
-   For the knowledge about Iceberg not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using Kyuubi, we can run SQL queries against Iceberg in a way that is more
-convenient, easier to understand, and easier to extend than manipulating
-Iceberg directly with Trino.
-
-Iceberg Integration
--------------------
-
-To enable the integration of the Kyuubi Trino SQL engine and Iceberg through the Catalog APIs, you need to:
-
-- Set the Trino extension and catalog :ref:`configurations`
-
-.. _configurations:
-
-Configurations
-**************
-
-To activate the functionality of Iceberg, we can set the following configurations:
-
-.. code-block:: properties
-
-   connector.name=iceberg
-   hive.metastore.uri=thrift://localhost:9083
-
-Iceberg Operations
-------------------
-
-Taking ``CREATE TABLE`` as an example,
-
-.. code-block:: sql
-
-   CREATE TABLE orders (
-     orderkey bigint,
-     orderstatus varchar,
-     totalprice double,
-     orderdate date
-   ) WITH (
-     format = 'ORC'
-   );
-
-Taking ``SELECT`` as an example,
-
-.. code-block:: sql
-
-   SELECT * FROM new_orders;
-
-Taking ``INSERT`` as an example,
-
-.. code-block:: sql
-
-   INSERT INTO cities VALUES (1, 'San Francisco');
-
-Taking ``UPDATE`` as an example,
-
-.. code-block:: sql
-
-   UPDATE purchases SET status = 'OVERDUE' WHERE ship_date IS NULL;
-
-Taking ``DELETE FROM`` as an example,
-
-.. code-block:: sql
-
-   DELETE FROM lineitem WHERE shipmode = 'AIR';
-
-.. _Iceberg: https://iceberg.apache.org/
-.. _Official Documentation: https://trino.io/docs/current/connector/iceberg.html#
diff --git a/content/docs/r1.6.0-incubating/_sources/connector/trino/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/connector/trino/index.rst.txt
deleted file mode 100644
index a5c5675..0000000
--- a/content/docs/r1.6.0-incubating/_sources/connector/trino/index.rst.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Connectors For Trino SQL Engine
-=====================================
-
-.. toctree::
-    :maxdepth: 2
-
-    flink_table_store
-    iceberg
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/engine_lifecycle.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/engine_lifecycle.md.txt
deleted file mode 100644
index 35944fa..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/engine_lifecycle.md.txt
+++ /dev/null
@@ -1,59 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# The TTL Of Kyuubi Engines
-
-For a multi-tenant cluster, the overall resource utilization is a KPI that measures how effectively its resources are utilized against their availability or capacity.
-To improve the overall resource utilization of the cluster,
-- At the cluster layer, we leverage the capabilities, such as [Capacity Scheduler](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html), of resource scheduling management services, such as YARN and K8s.
-- At the application layer, we should acquire and release resources according to the real workloads.
-
-## The Big Contributors Of Resource Waste
-
-- The time spent waiting for resources to be allocated, such as the scheduling delay and the start/stop cost.
-  - A longer time-to-live (TTL) for allocated resources can significantly reduce such time costs within an application.
-
-- The time resources spend sitting idle.
-  - A shorter TTL for allocated resources keeps all resources in rapid turnaround across applications.
-
-## TTL Types In Kyuubi Engines
-
-<body><div class="mxgraph" style="" data-mxgraph="{&quot;lightbox&quot;:false,&quot;nav&quot;:true,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-12-10T05:54:16.011Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36\&quot; etag=\&quot;OsSJijKpSO7wcXva956r\&quot; version=\&quot;15.8.7\&quot; type=\&quot;de [...]
-<script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
-</body>
-
-- Engine TTL
-  - The TTL of engines describes how long an engine will be cached after all sessions are disconnected.
-- Executor TTL
-  - The TTL of the executor describes how long an executor will be cached when no tasks come.
-
-## Configurations
-
-### Engine TTL
-
-| Key                                          | Default                                                                        | Meaning                                                                                                                                                                                                       | Type                                    | Since                                |
-|----------------------------------------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------------------------|
-| kyuubi\.session\.engine<br>\.check\.interval | <div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5M</div>  | <div style='width: 170pt;word-wrap: break-word;white-space: normal'>The check interval for engine timeout</div>                                                                                               | <div style='width: 30pt'>duration</div> | <div style='width: 20pt'>1.0.0</div> |
-| kyuubi\.session\.engine<br>\.idle\.timeout   | <div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT30M</div> | <div style='width: 170pt;word-wrap: break-word;white-space: normal'>engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.</div> | <div style='width: 30pt'>duration</div> | <div style='width: 20pt'>1.0.0</div> |
-
-The above two configurations can be used together to set the TTL of engines.
-They are user-facing and can be used in JDBC URLs.
-Note that [connection](engine_share_level.html#connection) share level engines are terminated as soon as the connection is disconnected, so these configurations do not necessarily take effect in that case.
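-
-For example, a connection-level sketch (the values are illustrative) that overrides the engine TTL through the JDBC URL:
-
-```bash
-# illustrative values; see the table above for the two TTL-related configurations
-bin/beeline -u 'jdbc:hive2://localhost:10009/;#kyuubi.session.engine.idle.timeout=PT1H;kyuubi.session.engine.check.interval=PT1M'
-```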
-
-### Executor TTL
-
-Executor TTL is part of functionality of Apache Spark's [Dynamic Resource Allocation](./spark/dynamic_allocation.md).
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_kubernetes.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_kubernetes.md.txt
deleted file mode 100644
index 6f3e73a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_kubernetes.md.txt
+++ /dev/null
@@ -1,121 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Deploy Kyuubi engines on Kubernetes
-
-## Requirements
-
-When you want to run Kyuubi's Spark SQL engines on Kubernetes, you should be familiar with the following things.
-
-* Read about [Running Spark On Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html)
-* An active Kubernetes cluster
-* [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
-* KubeConfig of the target cluster
-
-## Configurations
-
-### Master
-
-Spark on Kubernetes configures the master using a special URL format.
-
-`spark.master=k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`
-
-You can use the command `kubectl cluster-info` to get the API server host and port.
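-
-For example, assuming `kubectl cluster-info` reports the API server at `https://127.0.0.1:6443` (an illustrative address), the master can be set like this:
-
-```bash
-# illustrative address; take the real one from `kubectl cluster-info`
-echo "spark.master=k8s://https://127.0.0.1:6443" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-```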
-
-### Docker Image
-
-Spark ships a `./bin/docker-image-tool.sh` script to build and publish the Docker images for running Spark applications on Kubernetes.
-
-When deploying Kyuubi engines against a Kubernetes cluster, we need to set up the docker images in the Docker registry first.
-
-Example usage is:
-
-```shell
-./bin/docker-image-tool.sh -r <repo> -t my-tag build
-./bin/docker-image-tool.sh -r <repo> -t my-tag push
-# To build the Docker image with a specific OpenJDK version
-./bin/docker-image-tool.sh -r <repo> -t my-tag -b java_image_tag=<openjdk:${java_image_tag}> build
-# To build additional PySpark docker image
-./bin/docker-image-tool.sh -r <repo> -t my-tag -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile build
-# To build additional SparkR docker image
-./bin/docker-image-tool.sh -r <repo> -t my-tag -R ./kubernetes/dockerfiles/spark/bindings/R/Dockerfile build
-```
-
-### Test Cluster
-
-You can use the following shell code to test whether your cluster works as expected.
-
-```shell
-$SPARK_HOME/bin/spark-submit \
- --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
- --class org.apache.spark.examples.SparkPi \
- --conf spark.executor.instances=5 \
- --conf spark.dynamicAllocation.enabled=false \
- --conf spark.shuffle.service.enabled=false \
- --conf spark.kubernetes.container.image=<spark-image> \
- local://<path_to_examples.jar>
-```
-
-While the example is running, you can use the command `kubectl describe pod <podName>` to check whether the pod information meets expectations.
-
-### ServiceAccount
-
-When using client mode to submit an application, the Spark driver uses the kubeconfig to access the API server to create and watch executor pods.
-
-When using cluster mode to submit an application, the Spark driver pod uses a serviceAccount to access the API server to create and watch executor pods.
-
-In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following commands to create a serviceAccount (you need a kubeconfig with the permission to create serviceAccounts).
-
-```shell
-# create serviceAccount
-kubectl create serviceaccount spark -n <namespace>
-# binding role
-kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark --namespace=<namespace>
-```
-
-### Volumes
-
-Kubernetes can mount volumes into driver and executor pods through configurations.
-
-* hostPath: mounts a file or directory from the host node’s filesystem into a pod.
-* emptyDir: an initially empty volume created when a pod is assigned to a node.
-* nfs: mounts an existing NFS(Network File System) into a pod.
-* persistentVolumeClaim: mounts a PersistentVolume into a pod.
-
-Note: Please
-see [the Security section of this document](http://spark.apache.org/docs/latest/running-on-kubernetes.html#security) for security issues related to volume mounts.
-
-```
-spark.kubernetes.driver.volumes.<type>.<name>.options.path=<dist_path>
-spark.kubernetes.driver.volumes.<type>.<name>.mount.path=<container_path>
-
-spark.kubernetes.executor.volumes.<type>.<name>.options.path=<dist_path>
-spark.kubernetes.executor.volumes.<type>.<name>.mount.path=<container_path>
-```
-
-Read [Using Kubernetes Volumes](http://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more about volumes.
-
-### PodTemplateFile
-
-Kubernetes allows defining pods from template files. Spark users can similarly use template files to define the driver or executor pod configurations that Spark configurations do not support.
-
-To do so, specify the spark properties `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` to point to local files accessible to the spark-submit process.
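-
-A minimal sketch (the template paths are hypothetical and must be readable by the process that runs `spark-submit`):
-
-```bash
-# hypothetical template locations; point them at your own pod template YAML files
-echo "spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-echo "spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-```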
-
-### Other
-
-You can read Spark's official documentation for [Running on Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html) for more information.
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_yarn.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_yarn.md.txt
deleted file mode 100644
index 54f8b50..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/engine_on_yarn.md.txt
+++ /dev/null
@@ -1,258 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Deploy Kyuubi engines on Yarn
-
-## Deploy Kyuubi Spark Engine on Yarn
-
-### Requirements
-
-When you want to deploy Kyuubi's Spark SQL engines on YARN, you should be familiar with the following things.
-
-- Knowing the basics about [Running Spark on YARN](http://spark.apache.org/docs/latest/running-on-yarn.html)
-- A binary distribution of Spark which is built with YARN support
-  - You can use the built-in Spark distribution
-  - You can get it from [Spark official website](https://spark.apache.org/downloads.html) directly
-  - You can [Build Spark](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn) with `-Pyarn` maven option
-- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
-- An active Apache Hadoop HDFS cluster
-- Setup Hadoop client configurations at the machine the Kyuubi server locates
-
-### Configurations
-
-#### Environment
-
-Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
-
-If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, you should be able to run the `SparkPi` example on YARN.
-```bash
-$ HADOOP_CONF_DIR=/path/to/hadoop/conf $SPARK_HOME/bin/spark-submit \
-    --class org.apache.spark.examples.SparkPi \
-    --master yarn \
-    --queue thequeue \
-    $SPARK_HOME/examples/jars/spark-examples*.jar \
-    10
-```
-
-If the `SparkPi` passes, configure it in `$KYUUBI_HOME/conf/kyuubi-env.sh` or `$SPARK_HOME/conf/spark-env.sh`, e.g.
-
-```bash
-$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-```
-
-#### Spark Properties
-
-These properties are defined by Spark and Kyuubi will pass them to `spark-submit` to create Spark applications.
-
-**Note:** None of these would take effect if the application for a particular user already exists.
-
-- Specify it in the JDBC connection URL, e.g. `jdbc:hive2://localhost:10009/;#spark.master=yarn;spark.yarn.queue=thequeue`
-- Specify it in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`
-- Specify it in `$SPARK_HOME/conf/spark-defaults.conf`
-
-**Note:** The priority goes down from top to bottom.
-
-##### Master
-
-Setting `spark.master=yarn` tells Kyuubi to submit Spark SQL engine applications to the YARN cluster manager.
-
-##### Queue
-
-Set `spark.yarn.queue=thequeue` in the JDBC connection string to tell Kyuubi to use the QUEUE in the YARN cluster, otherwise,
-the QUEUE configured at Kyuubi server side will be used as default.
-
-##### Sizing
-
-Pass the configurations below through the JDBC connection string to set how many Spark executor instances will be used,
-and how many CPUs and how much memory the Spark driver, ApplicationMaster, and each executor will take; see the sketch after the table.
-
-Name | Default | Meaning
---- | --- | ---
-spark.executor.instances | 1 | The number of executors for static allocation
-spark.executor.cores | 1 | The number of cores to use on each executor
-spark.yarn.am.memory | 512m | Amount of memory to use for the YARN Application Master in client mode
-spark.yarn.am.memoryOverhead | amMemory * 0.10, with minimum of 384 | Amount of non-heap memory to be allocated per am process in client mode
-spark.driver.memory | 1g | Amount of memory to use for the driver process
-spark.driver.memoryOverhead | driverMemory * 0.10, with minimum of 384 | Amount of non-heap memory to be allocated per driver process in cluster mode
-spark.executor.memory | 1g | Amount of memory to use for the executor process
-spark.executor.memoryOverhead | executorMemory * 0.10, with minimum of 384 | Amount of additional memory to be allocated per executor process. This is memory that accounts for things like VM overheads, interned strings other native overheads, etc
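-
-For example, a connection string like the sketch below (the sizes are illustrative, not recommendations) asks for four 2-core, 4g executors when a new engine gets launched:
-
-```bash
-# illustrative sizing passed through the JDBC connection string
-bin/beeline -u 'jdbc:hive2://localhost:10009/;#spark.master=yarn;spark.executor.instances=4;spark.executor.cores=2;spark.executor.memory=4g'
-```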
-
-It is recommended to use [Dynamic Allocation](http://spark.apache.org/docs/3.0.1/configuration.html#dynamic-allocation) with Kyuubi,
-since the SQL engine is long-running, executes users' queries from clients periodically,
-and the demand for computing resources is not the same across those queries.
-It is better for Spark to release some executors when a query is lightweight or the SQL engine is idle.
-
-##### Tuning
-
-You can specify `spark.yarn.archive` or `spark.yarn.jars` to point to a world-readable location that contains Spark jars on HDFS,
-which allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. 
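-
-For example (the HDFS path is hypothetical):
-
-```bash
-# hypothetical HDFS location of a pre-uploaded archive containing the Spark jars
-echo "spark.yarn.archive=hdfs:///spark/jars/spark-archive.zip" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-```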
-
-##### Others
-
-Please refer to [Spark properties](http://spark.apache.org/docs/latest/running-on-yarn.html#spark-properties) to check other acceptable configs.
-
-### Kerberos
-
-Kyuubi currently does not support Spark's [YARN-specific Kerberos Configuration](http://spark.apache.org/docs/3.0.1/running-on-yarn.html#kerberos),
-so `spark.kerberos.keytab` and `spark.kerberos.principal` should not be used for now.
-
-Instead, you can schedule a periodic `kinit` process via a `crontab` task on the local machine that hosts the Kyuubi server, or simply use [Kyuubi Kinit](settings.html#kinit).
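-
-A crontab sketch of such a periodic `kinit` (the keytab path, principal, and interval are hypothetical):
-
-```bash
-# crontab entry (edit with `crontab -e`) for the OS user that runs the Kyuubi server
-0 */6 * * * kinit -kt /path/to/kyuubi.keytab kyuubi/kyuubi-host@EXAMPLE.COM
-```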
-
-## Deploy Kyuubi Flink Engine on Yarn
-
-### Requirements
-
-When you want to deploy Kyuubi's Flink SQL engines on YARN, you should be familiar with the following things.
-
-- Knowing the basics about [Running Flink on YARN](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn)
-- A binary distribution of Flink which is built with YARN support
-  - Download a recent Flink distribution from the [Flink official website](https://flink.apache.org/downloads.html) and unpack it
-- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
-  - Make sure your YARN cluster is ready to accept Flink applications by running `yarn top`. It should show no error messages
-- An active Object Storage cluster, e.g. [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html), S3 and [Minio](https://min.io/) etc.
-- Setup Hadoop client configurations at the machine the Kyuubi server locates
-
-### Yarn Session Mode
-
-#### Flink Configurations
-
-```bash
-execution.target: yarn-session
-# Yarn Session Cluster application id.
-yarn.application.id: application_00000000XX_00XX
-```
-
-#### Environment
-
-Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
-
-If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, and the `HADOOP_CLASSPATH` environment variable is set, you can launch a Flink on YARN session, and submit an example job:
-```bash
-# we assume to be in the root directory of 
-# the unzipped Flink distribution
-
-# (0) export HADOOP_CLASSPATH
-export HADOOP_CLASSPATH=`hadoop classpath`
-
-# (1) Start YARN Session
-./bin/yarn-session.sh --detached
-
-# (2) You can now access the Flink Web Interface through the
-# URL printed in the last lines of the command output, or through
-# the YARN ResourceManager web UI.
-
-# (3) Submit example job
-./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
-
-# (4) Stop YARN session (replace the application id based 
-# on the output of the yarn-session.sh command)
-echo "stop" | ./bin/yarn-session.sh -id application_XXXXX_XXX
- ```
-
-If the `TopSpeedWindowing` passes, configure it in `$KYUUBI_HOME/conf/kyuubi-env.sh`
-
-```bash
-$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-```
-
-#### Required Environment Variable
-
-The `FLINK_HADOOP_CLASSPATH` environment variable is required, too.
-
-For users on Hadoop 3.x, the Hadoop shaded client is recommended instead of the Hadoop vanilla jars.
-For users on Hadoop 2.x, `FLINK_HADOOP_CLASSPATH` should be set to the Hadoop classpath to use the Hadoop
-vanilla jars. For users who do not use Hadoop services such as HDFS and YARN at all, the Hadoop client jars
-are also required, and the Hadoop shaded client is recommended, as for Hadoop 3.x users.
-
-See [HADOOP-11656](https://issues.apache.org/jira/browse/HADOOP-11656) for details of Hadoop shaded client.
-
-To use Hadoop shaded client, please configure $KYUUBI_HOME/conf/kyuubi-env.sh as follows:
-
-```bash
-$ echo "export FLINK_HADOOP_CLASSPATH=/path/to/hadoop-client-runtime-3.3.2.jar:/path/to/hadoop-client-api-3.3.2.jar" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-```
-To use Hadoop vanilla jars, please configure $KYUUBI_HOME/conf/kyuubi-env.sh as follows:
-
-```bash
-$ echo "export FLINK_HADOOP_CLASSPATH=`hadoop classpath`" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-```
-### Deployment Modes Supported by Flink on YARN
-
-For experiment use, we recommend deploying Kyuubi Flink SQL engine in [Session Mode](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#session-mode).
-At present, [Application Mode](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#application-mode) and [Per-Job Mode (deprecated)](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#per-job-mode-deprecated) are not supported for Flink engine.
-
-### Kerberos
-
-As the Kyuubi Flink SQL engine wraps the Flink SQL client, which currently does not support [Flink Kerberos Configuration](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/#security-kerberos-login-keytab),
-`security.kerberos.login.keytab` and `security.kerberos.login.principal` should not be used for now.
-
-Instead, you can schedule a periodic `kinit` process via a `crontab` task on the local machine that hosts the Kyuubi server, or simply use [Kyuubi Kinit](settings.html#kinit).
-
-## Deploy Kyuubi Hive Engine on Yarn
-
-### Requirements
-
-When you want to deploy Kyuubi's Hive SQL engines on YARN, you should be familiar with the following things.
-
-- Knowing the basics about [Running Hive on YARN](https://cwiki.apache.org/confluence/display/Hive/GettingStarted)
-- A binary distribution of Hive
-  - You can use the built-in Hive distribution
-  - Download a recent Hive distribution from the [Hive official website](https://hive.apache.org/downloads.html) and unpack it
-  - You can [Build Hive](https://cwiki.apache.org/confluence/display/Hive//GettingStarted#GettingStarted-BuildingHivefromSource)
-- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
-  - Make sure your YARN cluster is ready to accept Hive applications by running `yarn top`. It should show no error messages
-- An active [Apache Hadoop HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) cluster
-- Setup Hadoop client configurations at the machine the Kyuubi server locates
-- An active [Hive Metastore Service](https://cwiki.apache.org/confluence/display/hive/design#Design-Metastore)
-
-### Configurations
-
-#### Environment
-
-Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
-
-If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, you should be able to run the `Hive SQL` example on YARN.
-
-```bash
-$ $HIVE_HOME/bin/hiveserver2
-# In another terminal
-$ $HIVE_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/default'
-0: jdbc:hive2://localhost:10000/default> CREATE TABLE pokes (foo INT, bar STRING);
-0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello');
-```
-
-If the `Hive SQL` example passes and a job appears in the YARN web UI, it indicates that the Hive environment works properly.
-
-#### Required Environment Variable
-
-The `HIVE_HADOOP_CLASSPATH` is required, too. It should contain `commons-collections-*.jar`, 
-`hadoop-client-runtime-*.jar`, `hadoop-client-api-*.jar` and `htrace-core4-*.jar`.
-All four jars are in the `HADOOP_HOME`. 
-
-For example, in Hadoop 3.1.0, they are located as follows:
-- `${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar`
-- `${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar`
-- `${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar`
-- `${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar`
-
-Configure them in `$KYUUBI_HOME/conf/kyuubi-env.sh` or `$HIVE_HOME/conf/hive-env.sh`, e.g.
-
-```bash
-$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-$ echo "export HIVE_HADOOP_CLASSPATH=${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar:${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar" >> $KYUUBI_HOME/conf/kyuubi-env.sh
-```
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/hive_metastore.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/hive_metastore.md.txt
deleted file mode 100644
index d4592b7..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/hive_metastore.md.txt
+++ /dev/null
@@ -1,210 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Integration with Hive Metastore
-
-In this section, you will learn how to configure Kyuubi to interact with Hive Metastore.
-
-- A common Hive metastore server can be configured at the Kyuubi server side
-- Individual Hive metastore servers can be configured by end users
-
-## Requirements
-
-- A running Hive metastore server
-  - [Hive Metastore Administration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration)
-  - [Configuring the Hive Metastore for CDH](https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html)
-- A Spark binary distribution built with `-Phive` support
-  - Use the built-in one in the Kyuubi distribution
-  - Download from [Spark official website](https://spark.apache.org/downloads.html)
-  - Build from Spark source, [Building With Hive and JDBC Support](http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support)
-- A copy of Hive client configuration
-
-So the whole point here is to let Spark applications use this copy of the Hive configuration to start their own Hive metastore client to talk to the Hive metastore server.
-
-## Default Behavior
-
-By default, Kyuubi launches Spark SQL engines pointing to a dummy embedded [Apache Derby](https://db.apache.org/derby/)-based metastore for each application,
-and this metadata can only be seen by one user at a time, e.g.
-
-```shell script
-bin/beeline -u 'jdbc:hive2://localhost:10009/' -n kentyao
-Connecting to jdbc:hive2://localhost:10009/
-Connected to: Spark SQL (version 1.0.0-SNAPSHOT)
-Driver: Hive JDBC (version 2.3.7)
-Transaction isolation: TRANSACTION_REPEATABLE_READ
-Beeline version 2.3.7 by Apache Hive
-0: jdbc:hive2://localhost:10009/> show databases;
-2020-11-16 23:50:50.388 INFO operation.ExecuteStatement:
-           Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
-                 application ID:  local-1605541809797
-                 application web UI: http://192.168.1.14:60165
-                 master: local[*]
-                 deploy mode: client
-                 version: 3.0.1
-           Start time: 2020-11-16T15:50:09.123Z
-           User: kentyao
-2020-11-16 23:50:50.404 INFO metastore.HiveMetaStore: 2: get_databases: *
-2020-11-16 23:50:50.404 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_databases: *
-2020-11-16 23:50:50.423 INFO operation.ExecuteStatement: Processing kentyao's query[8453e657-c1c4-4391-8406-ab4747a66c45]: RUNNING_STATE -> FINISHED_STATE, statement: show databases, time taken: 0.035 seconds
-+------------+
-| namespace  |
-+------------+
-| default    |
-+------------+
-1 row selected (0.122 seconds)
-0: jdbc:hive2://localhost:10009/> show tables;
-2020-11-16 23:50:52.957 INFO operation.ExecuteStatement:
-           Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
-                 application ID:  local-1605541809797
-                 application web UI: http://192.168.1.14:60165
-                 master: local[*]
-                 deploy mode: client
-                 version: 3.0.1
-           Start time: 2020-11-16T15:50:09.123Z
-           User: kentyao
-2020-11-16 23:50:52.968 INFO metastore.HiveMetaStore: 2: get_database: default
-2020-11-16 23:50:52.968 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_database: default
-2020-11-16 23:50:52.970 INFO metastore.HiveMetaStore: 2: get_database: default
-2020-11-16 23:50:52.970 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_database: default
-2020-11-16 23:50:52.972 INFO metastore.HiveMetaStore: 2: get_tables: db=default pat=*
-2020-11-16 23:50:52.972 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_tables: db=default pat=*
-2020-11-16 23:50:52.986 INFO operation.ExecuteStatement: Processing kentyao's query[ff902582-ba29-433b-b70a-c25ead1353a8]: RUNNING_STATE -> FINISHED_STATE, statement: show tables, time taken: 0.03 seconds
-+-----------+------------+--------------+
-| database  | tableName  | isTemporary  |
-+-----------+------------+--------------+
-+-----------+------------+--------------+
-No rows selected (0.04 seconds)
-```
-Use this mode for experimental purposes only.
-
-In a real production environment, we always have a communal standalone metadata store
-to manage the metadata of persistent relational entities, e.g. databases, tables, columns, partitions, for fast access.
-Usually, the Hive metastore is the de facto choice.
-
-## Related Configurations
-
-These are the basic needs for a Hive metastore client to communicate with the remote Hive Metastore server.
-
-Whether to use remote metastore database mode or server mode depends on the server-side configuration.
-
-### Remote Metastore Database
-
-Name | Value | Meaning
---- | --- | ---
-javax.jdo.option.ConnectionURL | jdbc:mysql://&lt;hostname&gt;/&lt;databaseName&gt;?<br>createDatabaseIfNotExist=true | metadata is stored in a MySQL server
-javax.jdo.option.ConnectionDriverName | com.mysql.jdbc.Driver | MySQL JDBC driver class
-javax.jdo.option.ConnectionUserName | &lt;username&gt; | user name for connecting to MySQL server
-javax.jdo.option.ConnectionPassword | &lt;password&gt; | password for connecting to MySQL server
-
-### Remote Metastore Server
-
-Name | Value | Meaning
---- | --- | ---
-hive.metastore.uris | thrift://&lt;host&gt;:&lt;port&gt;,thrift://&lt;host1&gt;:&lt;port1&gt; | <div style='width: 200pt;word-wrap: break-word;white-space: normal'>host and port for the Thrift metastore server.</div>
-
-## Activate Configurations
-
-### Via kyuubi-defaults.conf
-
-In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, all _**Hive primitive configurations**_, e.g. `hive.metastore.uris`,
-and the **_Spark derivatives_**, which are prefixed with `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
-will be loaded as Hive primitives by the Hive client inside the Spark application.
-
-Kyuubi will take these configurations as system wide defaults for all applications it launches.
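-
-For example, the following sketch (the metastore address is illustrative) sets a server-wide default:
-
-```bash
-# point every engine launched by this Kyuubi server at a shared metastore (illustrative address)
-echo "hive.metastore.uris=thrift://metastore-host:9083" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-```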
-
-### Via hive-site.xml
-
-Place your copy of `hive-site.xml` into `$SPARK_HOME/conf`,
-every single Spark application will automatically load this config file to its classpath.
-
-This version of configuration has lower priority than those in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`.
-
-### Via JDBC Connection URL
-
-We can pass _**Hive primitives**_ or **_Spark derivatives_** directly in the JDBC connection URL, e.g.
-
-```
-jdbc:hive2://localhost:10009/;#hive.metastore.uris=thrift://localhost:9083
-```
-
-This will override the defaults in `$SPARK_HOME/conf/hive-site.xml` and `$KYUUBI_HOME/conf/kyuubi-defaults.conf` for each _**user account**_.
-
-With this feature, end users can access different Hive metastore server instances.
-Similarly, this works for other services like HDFS and YARN too.
-
-**Limitation:** As most Hive configurations are final and unmodifiable in Spark at runtime,
-this only takes effect when instantiating the Spark application and will be ignored when reusing an existing application.
-So, keep this in mind.
-
-**!!!THIS WORKS ONLY ONCE!!!**
-
-**!!!THIS WORKS ONLY ONCE!!!**
-
-**!!!THIS WORKS ONLY ONCE!!!**
-
-### Via SET syntax
-
-Most Hive configurations are final and unmodifiable in Spark at runtime, so keep this in mind.
-
-**!!!THIS WON'T WORK!!!**
-
-**!!!THIS WON'T WORK!!!**
-
-**!!!THIS WON'T WORK!!!**
-
-## Version Compatibility
-
-If backward compatibility is guaranteed by Hive versioning,
-we can always use a lower version Hive metastore client to communicate with the higher version Hive metastore server.
-
-For example, Spark 3.0 was released with a built-in Hive client (2.3.7), so, ideally, the version of the server should be &gt;= 2.3.x.
-
-If you do have a legacy Hive metastore server that cannot be easily upgraded, you may face an issue like this by default:
-
-```java
-Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'get_table_req'
-	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
-	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1567)
-	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1554)
-	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1350)
-	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTable(SessionHiveMetaStoreClient.java:127)
-	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
-	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
-	at java.lang.reflect.Method.invoke(Method.java:498)
-	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
-	at com.sun.proxy.$Proxy37.getTable(Unknown Source)
-	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
-	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
-	at java.lang.reflect.Method.invoke(Method.java:498)
-	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2336)
-	at com.sun.proxy.$Proxy37.getTable(Unknown Source)
-	at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1274)
-	... 93 more
-```
-
-To prevent this problem, we can use Spark's [Interacting with Different Versions of Hive Metastore](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore).
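-
-For example, the following sketch (the version is illustrative) asks the Spark engines to use a Hive 1.2.2 metastore client, with the matching jars resolved from Maven:
-
-```bash
-# illustrative: match the metastore client version to your legacy metastore server
-echo "spark.sql.hive.metastore.version=1.2.2" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-echo "spark.sql.hive.metastore.jars=maven" >> $KYUUBI_HOME/conf/kyuubi-defaults.conf
-```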
-
-## Further Readings
-
-- Hive Wiki
-  - [Hive Metastore Administration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration)
-- Spark Online Documentation
-  - [Custom Hadoop/Hive Configuration](http://spark.apache.org/docs/latest/configuration.html#custom-hadoophive-configuration)
-  - [Hive Tables](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html)
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/deployment/index.rst.txt
deleted file mode 100644
index e682680..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/index.rst.txt
+++ /dev/null
@@ -1,53 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-Deploying Kyuubi
-================
-
-In this section, you will learn how to deploy Kyuubi against different platforms.
-
-Basics
-------
-
-.. toctree::
-    :maxdepth: 2
-    :glob:
-
-    kyuubi_on_kubernetes
-    hive_metastore
-    high_availability_guide
-
-Configurations
---------------
-
-.. toctree::
-    :maxdepth: 2
-    :glob:
-
-    settings
-
-Engines
--------
-
-.. toctree::
-    :maxdepth: 2
-    :glob:
-
-    engine_on_yarn
-    engine_on_kubernetes
-    engine_share_level
-    engine_lifecycle
-    spark/index
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/kyuubi_on_kubernetes.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/kyuubi_on_kubernetes.md.txt
deleted file mode 100644
index 8125920..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/kyuubi_on_kubernetes.md.txt
+++ /dev/null
@@ -1,103 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Deploy Kyuubi On Kubernetes
-
-## Requirements
-
-If you want to deploy Kyuubi on Kubernetes, you should be familiar with the following:
-
-* Use the official Kyuubi Docker image or build your own Kyuubi Docker image
-* An active Kubernetes cluster
-* Read about [Deploy Kyuubi engines on Kubernetes](engine_on_kubernetes.md)
-* [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
-* KubeConfig of the target cluster
-
-## Kyuubi Official Docker Image 
-
-You can find the official docker image at [Apache Kyuubi (Incubating) Docker Hub](https://registry.hub.docker.com/r/apache/kyuubi).
-
-## Build Kyuubi Docker Image
-
-You can build custom Docker images with the `${KYUUBI_HOME}/bin/docker-image-tool.sh` script contained in the binary package.
-
-Examples:
-```shell
-  - Build and push image with tag "v1.4.0" to docker.io/myrepo
-    $0 -r docker.io/myrepo -t v1.4.0 build
-    $0 -r docker.io/myrepo -t v1.4.0 push
-
-  - Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo
-    $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build
-    $0 -r docker.io/myrepo -t v1.4.0 push
-
-  - Build and push for multiple archs to docker.io/myrepo
-    $0 -r docker.io/myrepo -t v1.4.0 -X build
-
-  - Build with Spark placed "/path/spark"
-    $0 -s /path/spark build
-    
-  - Build with Spark Image myrepo/spark:3.1.0
-    $0 -S /opt/spark -b BASE_IMAGE=myrepo/spark:3.1.0 build
-```
-
-`${KYUUBI_HOME}/bin/docker-image-tool.sh` uses the Kyuubi version as the default Docker tag and always builds the `${repo}/kyuubi:${tag}` image.
-
-The script can also bundle an external Spark distribution into the Kyuubi image, so that the image can act as a client for submitting tasks, via `-s ${SPARK_HOME}`.
-
-Of course, if you already have an image that contains the Spark binary package, you don't have to copy Spark locally. Use your Spark image as the base image via the `-S ${SPARK_HOME_IN_DOCKER}` and `-b BASE_IMAGE=${SPARK_IMAGE}` arguments.
-
-You can use `${KYUUBI_HOME}/bin/docker-image-tool.sh -h` for more parameters.
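-
-As a concrete sketch (the repository name and tag are placeholders), building and pushing an image looks like:
-
-```shell
-${KYUUBI_HOME}/bin/docker-image-tool.sh -r docker.io/myrepo -t v1.6.0 build
-${KYUUBI_HOME}/bin/docker-image-tool.sh -r docker.io/myrepo -t v1.6.0 push
-```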
-
-## Deploy
-
-Multiple YAML files are provided under `${KYUUBI_HOME}/docker/` to help you deploy Kyuubi.
-
-You can deploy single-node Kyuubi through `${KYUUBI_HOME}/docker/kyuubi-pod.yaml` or `${KYUUBI_HOME}/docker/kyuubi-deployment.yaml`.
-
-Also, you can use `${KYUUBI_HOME}/docker/kyuubi-service.yaml` to deploy Kyuubi Service.
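-
-A minimal sketch of applying the provided manifests with kubectl (assuming your current kubeconfig context points at the target cluster and the YAML files have been adjusted to your environment):
-
-```shell
-kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-configmap.yaml
-kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-deployment.yaml
-kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-service.yaml
-kubectl get pods
-```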
-
-## Config
-
-You can configure Kyuubi the old-fashioned way by placing `kyuubi-defaults.conf` inside the image, but this approach is not recommended on Kubernetes.
-
-Kyuubi provides `${KYUUBI_HOME}/docker/kyuubi-configmap.yaml` to build a ConfigMap for Kyuubi.
-
-You can find out how to use it in the comments inside the above file.
-
-If you want to know about Kyuubi engine configurations on Kubernetes, refer to [Deploy Kyuubi engines on Kubernetes](engine_on_kubernetes.md).
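-
-As an alternative sketch, you can also generate a ConfigMap directly from a local configuration file; the ConfigMap name below is illustrative and must match what your deployment mounts:
-
-```shell
-kubectl create configmap kyuubi-defaults --from-file=${KYUUBI_HOME}/conf/kyuubi-defaults.conf
-```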
-
-## Connect
-
-If you do not use a Service or HostNetwork to obtain an address for the node where Kyuubi is deployed,
-you should connect from inside the Kyuubi pod like this:
-```shell
-kubectl exec -it kyuubi-example -- /bin/bash
-${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009'
-```
-
-Or you can submit tasks directly through local beeline:
-```shell
-${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
-```
-When using a Service of type NodePort, `port` is the nodePort and `hostname` is the hostname of any Kubernetes node.
-
-When using HostNetwork, `port` is the Kyuubi containerPort and `hostname` is the hostname of the node where Kyuubi is deployed.
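-
-For example, a sketch of connecting through a NodePort Service (the Service name, node hostname and port are placeholders):
-
-```shell
-# look up the allocated nodePort of the Kyuubi Service
-kubectl get svc kyuubi-svc -o jsonpath='{.spec.ports[0].nodePort}'
-# then connect with any Kubernetes node's hostname and that port
-${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://k8s-node-1:30009'
-```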
-
-## TODO 
-Kyuubi will provide other connection methods in the future, such as `Ingress` and `LoadBalancer`.
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/settings.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/settings.md.txt
deleted file mode 100644
index 6513ed0..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/settings.md.txt
+++ /dev/null
@@ -1,620 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY [org.apache.kyuubi.config.AllKyuubiConfiguration] -->
-
-
-# Introduction to the Kyuubi Configurations System
-
-Kyuubi provides several ways to configure the system and corresponding engines.
-
-
-## Environments
-
-
-You can configure the environment variables in `$KYUUBI_HOME/conf/kyuubi-env.sh`, e.g. `JAVA_HOME`; this Java runtime will then be used both for the Kyuubi server instance and the applications it launches. You can also change a variable in the subprocess's env configuration file, e.g. `$SPARK_HOME/conf/spark-env.sh`, to use a more specific ENV for SQL engine applications.
-```bash
-#!/usr/bin/env bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-#
-# - JAVA_HOME               Java runtime to use. By default use "java" from PATH.
-#
-#
-# - KYUUBI_CONF_DIR         Directory containing the Kyuubi configurations to use.
-#                           (Default: $KYUUBI_HOME/conf)
-# - KYUUBI_LOG_DIR          Directory for Kyuubi server-side logs.
-#                           (Default: $KYUUBI_HOME/logs)
-# - KYUUBI_PID_DIR          Directory stores the Kyuubi instance pid file.
-#                           (Default: $KYUUBI_HOME/pid)
-# - KYUUBI_MAX_LOG_FILES    Maximum number of Kyuubi server logs can rotate to.
-#                           (Default: 5)
-# - KYUUBI_JAVA_OPTS        JVM options for the Kyuubi server itself in the form "-Dx=y".
-#                           (Default: none).
-# - KYUUBI_CTL_JAVA_OPTS    JVM options for the Kyuubi ctl itself in the form "-Dx=y".
-#                           (Default: none).
-# - KYUUBI_BEELINE_OPTS     JVM options for the Kyuubi BeeLine in the form "-Dx=Y".
-#                           (Default: none)
-# - KYUUBI_NICENESS         The scheduling priority for Kyuubi server.
-#                           (Default: 0)
-# - KYUUBI_WORK_DIR_ROOT    Root directory for launching sql engine applications.
-#                           (Default: $KYUUBI_HOME/work)
-# - HADOOP_CONF_DIR         Directory containing the Hadoop / YARN configuration to use.
-# - YARN_CONF_DIR           Directory containing the YARN configuration to use.
-#
-# - SPARK_HOME              Spark distribution which you would like to use in Kyuubi.
-# - SPARK_CONF_DIR          Optional directory where the Spark configuration lives.
-#                           (Default: $SPARK_HOME/conf)
-# - FLINK_HOME              Flink distribution which you would like to use in Kyuubi.
-# - FLINK_CONF_DIR          Optional directory where the Flink configuration lives.
-#                           (Default: $FLINK_HOME/conf)
-# - FLINK_HADOOP_CLASSPATH  Required Hadoop jars when you use the Kyuubi Flink engine.
-# - HIVE_HOME               Hive distribution which you would like to use in Kyuubi.
-# - HIVE_CONF_DIR           Optional directory where the Hive configuration lives.
-#                           (Default: $HIVE_HOME/conf)
-# - HIVE_HADOOP_CLASSPATH   Required Hadoop jars when you use the Kyuubi Hive engine.
-#
-
-
-## Examples ##
-
-# export JAVA_HOME=/usr/jdk64/jdk1.8.0_152
-# export SPARK_HOME=/opt/spark
-# export FLINK_HOME=/opt/flink
-# export HIVE_HOME=/opt/hive
-# export FLINK_HADOOP_CLASSPATH=/path/to/hadoop-client-runtime-3.3.2.jar:/path/to/hadoop-client-api-3.3.2.jar
-# export HIVE_HADOOP_CLASSPATH=${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar:${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar
-# export HADOOP_CONF_DIR=/usr/ndp/current/mapreduce_client/conf
-# export YARN_CONF_DIR=/usr/ndp/current/yarn/conf
-# export KYUUBI_JAVA_OPTS="-Xmx10g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark -XX:MaxDirectMemorySize=1024m  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribut [...]
-# export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark"
-```
-
-For environment variables that only need to be passed to the engine side, you can set them with a Kyuubi configuration item in the form `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`, the environment variable `SPARK_DRIVER_MEMORY` with value `4g` is passed to the engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the value of `SPARK_CONF_DIR` on the engine side is set to `/apache/confs/spark/conf`.
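-
-A minimal sketch of the corresponding entries in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the values are illustrative):
-
-```bash
-kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g
-kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf
-```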
-
-## Kyuubi Configurations
-
-You can configure the Kyuubi properties in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`. For example:
-```bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-## Kyuubi Configurations
-
-#
-# kyuubi.authentication           NONE
-# kyuubi.frontend.bind.host       localhost
-# kyuubi.frontend.bind.port       10009
-#
-
-# Details in https://kyuubi.apache.org/docs/latest/deployment/settings.html
-```
-
-### Authentication
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.authentication|NONE|A comma separated list of client authentication types.<ul> <li>NOSASL: raw transport.</li> <li>NONE: no authentication check.</li> <li>KERBEROS: Kerberos/GSSAPI authentication.</li> <li>CUSTOM: User-defined authentication.</li> <li>JDBC: JDBC query authentication.</li> <li>LDAP: Lightweight Directory Access Protocol authentication.</li></ul> Note that: For KERBEROS, it is SASL/GSSAPI mechanism, and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanism. I [...]
-kyuubi.authentication.custom.class|&lt;undefined&gt;|User-defined authentication implementation of org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider|string|1.3.0
-kyuubi.authentication.jdbc.driver.class|&lt;undefined&gt;|Driver class name for JDBC Authentication Provider.|string|1.6.0
-kyuubi.authentication.jdbc.password|&lt;undefined&gt;|Database password for JDBC Authentication Provider.|string|1.6.0
-kyuubi.authentication.jdbc.query|&lt;undefined&gt;|Query SQL template with placeholders for JDBC Authentication Provider to execute. Authentication passes if the result set is not empty.The SQL statement must start with the `SELECT` clause. Available placeholders are `${user}` and `${password}`.|string|1.6.0
-kyuubi.authentication.jdbc.url|&lt;undefined&gt;|JDBC URL for JDBC Authentication Provider.|string|1.6.0
-kyuubi.authentication.jdbc.user|&lt;undefined&gt;|Database user for JDBC Authentication Provider.|string|1.6.0
-kyuubi.authentication.ldap.base.dn|&lt;undefined&gt;|LDAP base DN.|string|1.0.0
-kyuubi.authentication.ldap.domain|&lt;undefined&gt;|LDAP domain.|string|1.0.0
-kyuubi.authentication.ldap.guidKey|uid|LDAP attribute name whose values are unique in this LDAP server. For example: uid or cn.|string|1.2.0
-kyuubi.authentication.ldap.url|&lt;undefined&gt;|SPACE character separated LDAP connection URL(s).|string|1.0.0
-kyuubi.authentication.sasl.qop|auth|Sasl QOP enable higher levels of protection for Kyuubi communication with clients.<ul> <li>auth - authentication only (default)</li> <li>auth-int - authentication plus integrity protection</li> <li>auth-conf - authentication plus integrity and confidentiality protection. This is applicable only if Kyuubi is configured to use Kerberos authentication.</li> </ul>|string|1.0.0
-
-
-### Backend
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.backend.engine.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in SQL engine applications|duration|1.0.0
-kyuubi.backend.engine.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in SQL engine applications|duration|1.0.0
-kyuubi.backend.engine.exec.pool.size|100|Number of threads in the operation execution thread pool of SQL engine applications|int|1.0.0
-kyuubi.backend.engine.exec.pool.wait.queue.size|100|Size of the wait queue for the operation execution thread pool in SQL engine applications|int|1.0.0
-kyuubi.backend.server.event.json.log.path|file:///tmp/kyuubi/events|The location of server events go for the builtin JSON logger|string|1.4.0
-kyuubi.backend.server.event.loggers||A comma separated list of server history loggers, where session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.backend.server.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.4.0
-kyuubi.backend.server.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in Kyuubi server|duration|1.0.0
-kyuubi.backend.server.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in Kyuubi server|duration|1.0.0
-kyuubi.backend.server.exec.pool.size|100|Number of threads in the operation execution thread pool of Kyuubi server|int|1.0.0
-kyuubi.backend.server.exec.pool.wait.queue.size|100|Size of the wait queue for the operation execution thread pool of Kyuubi server|int|1.0.0
-
-
-### Batch
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.batch.application.check.interval|PT5S|The interval to check batch job application information.|duration|1.6.0
-kyuubi.batch.conf.ignore.list||A comma separated list of ignored keys for batch conf. If the batch conf contains any of them, the key and the corresponding value will be removed silently during batch job submission. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering. You can also pre-define some config for batch job submission with prefix: kyuubi.batchConf.[batchType]. For example, you can pre-define `spark.master [...]
-
-
-### Credentials
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.credentials.check.interval|PT5M|The interval to check the expiration of cached <user, CredentialsRef> pairs.|duration|1.6.0
-kyuubi.credentials.hadoopfs.enabled|true|Whether to renew Hadoop filesystem delegation tokens|boolean|1.4.0
-kyuubi.credentials.hadoopfs.uris||Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here.|seq|1.4.0
-kyuubi.credentials.hive.enabled|true|Whether to renew Hive metastore delegation token|boolean|1.4.0
-kyuubi.credentials.idle.timeout|PT6H|inactive users' credentials will be expired after a configured timeout|duration|1.6.0
-kyuubi.credentials.renewal.interval|PT1H|How often Kyuubi renews one user's delegation tokens|duration|1.4.0
-kyuubi.credentials.renewal.retry.wait|PT1M|How long to wait before retrying to fetch new credentials after a failure.|duration|1.4.0
-kyuubi.credentials.update.wait.timeout|PT1M|How long to wait until credentials are ready.|duration|1.5.0
-
-
-### Ctl
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.ctl.batch.log.query.interval|PT3S|The interval for fetching batch logs.|duration|1.6.0
-kyuubi.ctl.rest.auth.schema|basic|The authentication schema. Valid values are: basic, spnego.|string|1.6.0
-kyuubi.ctl.rest.base.url|&lt;undefined&gt;|The REST API base URL, which contains the scheme (http:// or https://), host name, port number|string|1.6.0
-kyuubi.ctl.rest.connect.timeout|PT30S|The timeout[ms] for establishing the connection with the kyuubi server.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
-kyuubi.ctl.rest.request.attempt.wait|PT3S|How long to wait between attempts of ctl rest request.|duration|1.6.0
-kyuubi.ctl.rest.request.max.attempts|3|The max attempts number for ctl rest request.|int|1.6.0
-kyuubi.ctl.rest.socket.timeout|PT2M|The timeout[ms] for waiting for data packets after connection is established.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
-kyuubi.ctl.rest.spnego.host|&lt;undefined&gt;|When auth schema is spnego, need to config spnego host.|string|1.6.0
-
-
-### Delegation
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.delegation.key.update.interval|PT24H|unused yet|duration|1.0.0
-kyuubi.delegation.token.gc.interval|PT1H|unused yet|duration|1.0.0
-kyuubi.delegation.token.max.lifetime|PT168H|unused yet|duration|1.0.0
-kyuubi.delegation.token.renew.interval|PT168H|unused yet|duration|1.0.0
-
-
-### Engine
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.engine.connection.url.use.hostname|true|(deprecated) When true, engine register with hostname to zookeeper. When spark run on k8s with cluster mode, set to false to ensure that server can connect to engine|boolean|1.3.0
-kyuubi.engine.deregister.exception.classes||A comma separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself.|seq|1.2.0
-kyuubi.engine.deregister.exception.messages||A comma separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself.|seq|1.2.0
-kyuubi.engine.deregister.exception.ttl|PT30M|Time to live(TTL) for exceptions pattern specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits the kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait for self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures.|duration|1.2.0
-kyuubi.engine.deregister.job.max.failures|4|Number of failures of job before deregistering the engine.|int|1.2.0
-kyuubi.engine.event.json.log.path|file:///tmp/kyuubi/events|The location of all the engine events go for the builtin JSON logger.<ul><li>Local Path: start with 'file://'</li><li>HDFS Path: start with 'hdfs://'</li></ul>|string|1.3.0
-kyuubi.engine.event.loggers|SPARK|A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.3.0
-kyuubi.engine.flink.extra.classpath|&lt;undefined&gt;|The extra classpath for the flink sql engine, for configuring location of hadoop client jars, etc|string|1.6.0
-kyuubi.engine.flink.java.options|&lt;undefined&gt;|The extra java options for the flink sql engine|string|1.6.0
-kyuubi.engine.flink.memory|1g|The heap memory for the flink sql engine|string|1.6.0
-kyuubi.engine.hive.extra.classpath|&lt;undefined&gt;|The extra classpath for the hive query engine, for configuring location of hadoop client jars, etc|string|1.6.0
-kyuubi.engine.hive.java.options|&lt;undefined&gt;|The extra java options for the hive query engine|string|1.6.0
-kyuubi.engine.hive.memory|1g|The heap memory for the hive query engine|string|1.6.0
-kyuubi.engine.initialize.sql|SHOW DATABASES|SemiColon-separated list of SQL statements to be initialized in the newly created engine before queries. i.e. use `SHOW DATABASES` to eagerly active HiveClient. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.2.0
-kyuubi.engine.jdbc.connection.password|&lt;undefined&gt;|The password is used for connecting to server|string|1.6.0
-kyuubi.engine.jdbc.connection.properties||The additional properties are used for connecting to server|seq|1.6.0
-kyuubi.engine.jdbc.connection.provider|&lt;undefined&gt;|The connection provider is used for getting a connection from server|string|1.6.0
-kyuubi.engine.jdbc.connection.url|&lt;undefined&gt;|The server url that engine will connect to|string|1.6.0
-kyuubi.engine.jdbc.connection.user|&lt;undefined&gt;|The user is used for connecting to server|string|1.6.0
-kyuubi.engine.jdbc.driver.class|&lt;undefined&gt;|The driver class for jdbc engine connection|string|1.6.0
-kyuubi.engine.jdbc.extra.classpath|&lt;undefined&gt;|The extra classpath for the jdbc query engine, for configuring location of jdbc driver, etc|string|1.6.0
-kyuubi.engine.jdbc.java.options|&lt;undefined&gt;|The extra java options for the jdbc query engine|string|1.6.0
-kyuubi.engine.jdbc.memory|1g|The heap memory for the jdbc query engine|string|1.6.0
-kyuubi.engine.jdbc.type|&lt;undefined&gt;|The short name of jdbc type|string|1.6.0
-kyuubi.engine.operation.convert.catalog.database.enabled|true|When set to true, The engine converts the JDBC methods of set/get Catalog and set/get Schema to the implementation of different engines|boolean|1.6.0
-kyuubi.engine.operation.log.dir.root|engine_operation_logs|Root directory for query operation log at engine-side.|string|1.4.0
-kyuubi.engine.pool.name|engine-pool|The name of engine pool.|string|1.5.0
-kyuubi.engine.pool.size|-1|The size of engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold).|int|1.4.0
-kyuubi.engine.pool.size.threshold|9|This parameter is introduced as a server-side parameter, and controls the upper limit of the engine pool.|int|1.4.0
-kyuubi.engine.session.initialize.sql||SemiColon-separated list of SQL statements to be initialized in the newly created engine session before queries. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.3.0
-kyuubi.engine.share.level|USER|Engines will be shared in different levels, available configs are: <ul> <li>CONNECTION: engine will not be shared but only used by the current client connection</li> <li>USER: engine will be shared by all sessions created by a unique username, see also kyuubi.engine.share.level.subdomain</li> <li>GROUP: engine will be shared by all sessions created by all users belong to the same primary group name. The engine will be launched by the group name as the effec [...]
-kyuubi.engine.share.level.sub.domain|&lt;undefined&gt;|(deprecated) - Using kyuubi.engine.share.level.subdomain instead|string|1.2.0
-kyuubi.engine.share.level.subdomain|&lt;undefined&gt;|Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string values that must be a valid zookeeper sub path. For example, for `USER` share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the `USER` share level. When disable engine pool, use 'default' if absent.|string|1.4.0
-kyuubi.engine.single.spark.session|false|When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.|boolean|1.3.0
-kyuubi.engine.trino.extra.classpath|&lt;undefined&gt;|The extra classpath for the trino query engine, for configuring other libs which may need by the trino engine |string|1.6.0
-kyuubi.engine.trino.java.options|&lt;undefined&gt;|The extra java options for the trino query engine|string|1.6.0
-kyuubi.engine.trino.memory|1g|The heap memory for the trino query engine|string|1.6.0
-kyuubi.engine.type|SPARK_SQL|Specify the detailed engine that supported by the Kyuubi. The engine type bindings to SESSION scope. This configuration is experimental. Currently, available configs are: <ul> <li>SPARK_SQL: specify this engine type will launch a Spark engine which can provide all the capacity of the Apache Spark. Note, it's a default engine type.</li> <li>FLINK_SQL: specify this engine type will launch a Flink engine which can provide all the capacity of the Apache Flink.</l [...]
-kyuubi.engine.ui.retainedSessions|200|The number of SQL client sessions kept in the Kyuubi Query Engine web UI.|int|1.4.0
-kyuubi.engine.ui.retainedStatements|200|The number of statements kept in the Kyuubi Query Engine web UI.|int|1.4.0
-kyuubi.engine.ui.stop.enabled|true|When true, allows Kyuubi engine to be killed from the Spark Web UI.|boolean|1.3.0
-kyuubi.engine.user.isolated.spark.session|true|When set to false, if the engine is running in a group or server share level, all the JDBC/ODBC connections will be isolated against the user. Including: the temporary views, function registries, SQL configuration and the current database. Note that, it does not affect if the share level is connection or user.|boolean|1.6.0
-kyuubi.engine.user.isolated.spark.session.idle.interval|PT1M|The interval to check if the user isolated spark session is timeout.|duration|1.6.0
-kyuubi.engine.user.isolated.spark.session.idle.timeout|PT6H|If kyuubi.engine.user.isolated.spark.session is false, we will release the spark session if its corresponding user is inactive after this configured timeout.|duration|1.6.0
-
-
-### Frontend
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.frontend.backoff.slot.length|PT0.1S|(deprecated) Time to back off during login to the thrift frontend service.|duration|1.0.0
-kyuubi.frontend.bind.host|&lt;undefined&gt;|(deprecated) Hostname or IP of the machine on which to run the thrift frontend service via binary protocol.|string|1.0.0
-kyuubi.frontend.bind.port|10009|(deprecated) Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.0.0
-kyuubi.frontend.connection.url.use.hostname|true|When true, frontend services prefer hostname, otherwise, ip address|boolean|1.5.0
-kyuubi.frontend.login.timeout|PT20S|(deprecated) Timeout for Thrift clients during login to the thrift frontend service.|duration|1.0.0
-kyuubi.frontend.max.message.size|104857600|(deprecated) Maximum message size in bytes a Kyuubi server will accept.|int|1.0.0
-kyuubi.frontend.max.worker.threads|999|(deprecated) Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
-kyuubi.frontend.min.worker.threads|9|(deprecated) Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
-kyuubi.frontend.mysql.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the MySQL frontend service.|string|1.4.0
-kyuubi.frontend.mysql.bind.port|3309|Port of the machine on which to run the MySQL frontend service.|int|1.4.0
-kyuubi.frontend.mysql.max.worker.threads|999|Maximum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
-kyuubi.frontend.mysql.min.worker.threads|9|Minimum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
-kyuubi.frontend.mysql.netty.worker.threads|&lt;undefined&gt;|Number of thread in the netty worker event loop of MySQL frontend service. Use min(cpu_cores, 8) in default.|int|1.4.0
-kyuubi.frontend.mysql.worker.keepalive.time|PT1M|Time(ms) that an idle async thread of the command execution thread pool will wait for a new task to arrive before terminating in MySQL frontend service|duration|1.4.0
-kyuubi.frontend.protocols|THRIFT_BINARY|A comma separated list for all frontend protocols <ul> <li>THRIFT_BINARY - HiveServer2 compatible thrift binary protocol.</li> <li>THRIFT_HTTP - HiveServer2 compatible thrift http protocol.</li> <li>REST - Kyuubi defined REST API(experimental).</li>  <li>MYSQL - MySQL compatible text protocol(experimental).</li> </ul>|seq|1.4.0
-kyuubi.frontend.proxy.http.client.ip.header|X-Real-IP|The http header to record the real client ip address. If your server is behind a load balancer or other proxy, the server will see this load balancer or proxy IP address as the client IP address, to get around this common issue, most load balancers or proxies offer the ability to record the real remote IP address in an HTTP header that will be added to the request for other devices to use. Note that, because the header value can be sp [...]
-kyuubi.frontend.rest.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the REST frontend service.|string|1.4.0
-kyuubi.frontend.rest.bind.port|10099|Port of the machine on which to run the REST frontend service.|int|1.4.0
-kyuubi.frontend.thrift.backoff.slot.length|PT0.1S|Time to back off during login to the thrift frontend service.|duration|1.4.0
-kyuubi.frontend.thrift.binary.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via binary protocol.|string|1.4.0
-kyuubi.frontend.thrift.binary.bind.port|10009|Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.4.0
-kyuubi.frontend.thrift.http.allow.user.substitution|true|Allow alternate user to be specified as part of open connection request when using HTTP transport mode.|boolean|1.6.0
-kyuubi.frontend.thrift.http.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via http protocol.|string|1.6.0
-kyuubi.frontend.thrift.http.bind.port|10010|Port of the machine on which to run the thrift frontend service via http protocol.|int|1.6.0
-kyuubi.frontend.thrift.http.compression.enabled|true|Enable thrift http compression via Jetty compression support|boolean|1.6.0
-kyuubi.frontend.thrift.http.cookie.auth.enabled|true|When true, Kyuubi in HTTP transport mode, will use cookie based authentication mechanism|boolean|1.6.0
-kyuubi.frontend.thrift.http.cookie.domain|&lt;undefined&gt;|Domain for the Kyuubi generated cookies|string|1.6.0
-kyuubi.frontend.thrift.http.cookie.is.httponly|true|HttpOnly attribute of the Kyuubi generated cookie.|boolean|1.6.0
-kyuubi.frontend.thrift.http.cookie.max.age|86400|Maximum age in seconds for server side cookie used by Kyuubi in HTTP mode.|int|1.6.0
-kyuubi.frontend.thrift.http.cookie.path|&lt;undefined&gt;|Path for the Kyuubi generated cookies|string|1.6.0
-kyuubi.frontend.thrift.http.max.idle.time|PT30M|Maximum idle time for a connection on the server when in HTTP mode.|duration|1.6.0
-kyuubi.frontend.thrift.http.path|cliservice|Path component of URL endpoint when in HTTP mode.|string|1.6.0
-kyuubi.frontend.thrift.http.request.header.size|6144|Request header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
-kyuubi.frontend.thrift.http.response.header.size|6144|Response header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
-kyuubi.frontend.thrift.http.ssl.keystore.password|&lt;undefined&gt;|SSL certificate keystore password.|string|1.6.0
-kyuubi.frontend.thrift.http.ssl.keystore.path|&lt;undefined&gt;|SSL certificate keystore location.|string|1.6.0
-kyuubi.frontend.thrift.http.ssl.protocol.blacklist|SSLv2,SSLv3|SSL Versions to disable when using HTTP transport mode.|string|1.6.0
-kyuubi.frontend.thrift.http.use.SSL|false|Set this to true for using SSL encryption in http mode.|boolean|1.6.0
-kyuubi.frontend.thrift.http.xsrf.filter.enabled|false|If enabled, Kyuubi will block any requests made to it over http if an X-XSRF-HEADER header is not present|boolean|1.6.0
-kyuubi.frontend.thrift.login.timeout|PT20S|Timeout for Thrift clients during login to the thrift frontend service.|duration|1.4.0
-kyuubi.frontend.thrift.max.message.size|104857600|Maximum message size in bytes a Kyuubi server will accept.|int|1.4.0
-kyuubi.frontend.thrift.max.worker.threads|999|Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
-kyuubi.frontend.thrift.min.worker.threads|9|Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
-kyuubi.frontend.thrift.worker.keepalive.time|PT1M|Keep-alive time (in milliseconds) for an idle worker thread|duration|1.4.0
-kyuubi.frontend.worker.keepalive.time|PT1M|(deprecated) Keep-alive time (in milliseconds) for an idle worker thread|duration|1.0.0
-
-
-### Ha
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.ha.addresses||The connection string for the discovery ensemble|string|1.6.0
-kyuubi.ha.client.class|org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient|Class name for service discovery client.<ul> <li>Zookeeper: org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient</li> <li>Etcd: org.apache.kyuubi.ha.client.etcd.EtcdDiscoveryClient</li></ul>|string|1.6.0
-kyuubi.ha.etcd.lease.timeout|PT10S|Timeout for etcd keep alive lease. The kyuubi server will detect an unexpected loss of an engine after at most this duration.|duration|1.6.0
-kyuubi.ha.etcd.ssl.ca.path|&lt;undefined&gt;|Where the etcd CA certificate file is stored.|string|1.6.0
-kyuubi.ha.etcd.ssl.client.certificate.path|&lt;undefined&gt;|Where the etcd SSL certificate file is stored.|string|1.6.0
-kyuubi.ha.etcd.ssl.client.key.path|&lt;undefined&gt;|Where the etcd SSL key file is stored.|string|1.6.0
-kyuubi.ha.etcd.ssl.enabled|false|When set to true, will build a ssl secured etcd client.|boolean|1.6.0
-kyuubi.ha.namespace|kyuubi|The root directory for the service to deploy its instance uri|string|1.6.0
-kyuubi.ha.zookeeper.acl.enabled|false|Set to true if the zookeeper ensemble is kerberized|boolean|1.0.0
-kyuubi.ha.zookeeper.auth.digest|&lt;undefined&gt;|The digest auth string is used for zookeeper authentication, like: username:password.|string|1.3.2
-kyuubi.ha.zookeeper.auth.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab is used for zookeeper authentication.|string|1.3.2
-kyuubi.ha.zookeeper.auth.principal|&lt;undefined&gt;|Name of the Kerberos principal is used for zookeeper authentication.|string|1.3.2
-kyuubi.ha.zookeeper.auth.type|NONE|The type of zookeeper authentication, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
-kyuubi.ha.zookeeper.connection.base.retry.wait|1000|Initial amount of time to wait between retries to the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.connection.max.retries|3|Max retry times for connecting to the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.connection.max.retry.wait|30000|Max amount of time to wait between retries for BOUNDED_EXPONENTIAL_BACKOFF policy can reach, or max time until elapsed for UNTIL_ELAPSED policy to connect the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.connection.retry.policy|EXPONENTIAL_BACKOFF|The retry policy for connecting to the zookeeper ensemble, all candidates are: <ul><li>ONE_TIME</li><li> N_TIME</li><li> EXPONENTIAL_BACKOFF</li><li> BOUNDED_EXPONENTIAL_BACKOFF</li><li> UNTIL_ELAPSED</li></ul>|string|1.0.0
-kyuubi.ha.zookeeper.connection.timeout|15000|The timeout(ms) of creating the connection to the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.engine.auth.type|NONE|The type of zookeeper authentication for engine, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
-kyuubi.ha.zookeeper.namespace|kyuubi|(deprecated) The root directory for the service to deploy its instance uri|string|1.0.0
-kyuubi.ha.zookeeper.node.creation.timeout|PT2M|Timeout for creating zookeeper node|duration|1.2.0
-kyuubi.ha.zookeeper.publish.configs|false|When set to true, publish Kerberos configs to Zookeeper.Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch.|boolean|1.4.0
-kyuubi.ha.zookeeper.quorum||(deprecated) The connection string for the zookeeper ensemble|string|1.0.0
-kyuubi.ha.zookeeper.session.timeout|60000|The timeout(ms) of a connected session to be idled|int|1.0.0
-
-
-### Kinit
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.kinit.interval|PT1H|How often will Kyuubi server run `kinit -kt [keytab] [principal]` to renew the local Kerberos credentials cache|duration|1.0.0
-kyuubi.kinit.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab.|string|1.0.0
-kyuubi.kinit.max.attempts|10|How many times will `kinit` process retry|int|1.0.0
-kyuubi.kinit.principal|&lt;undefined&gt;|Name of the Kerberos principal.|string|1.0.0
-
-
-### Kubernetes
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.kubernetes.context|&lt;undefined&gt;|The desired context from your kubernetes config file used to configure the K8S client for interacting with the cluster.|string|1.6.0
-
-
-### Metadata
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.metadata.cleaner.enabled|true|Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the metadata that is in terminate state with max age limitation.|boolean|1.6.0
-kyuubi.metadata.cleaner.interval|PT30M|The interval to check and clean expired metadata.|duration|1.6.0
-kyuubi.metadata.max.age|PT72H|The maximum age of metadata, the metadata that exceeds the age will be cleaned.|duration|1.6.0
-kyuubi.metadata.recovery.threads|10|The number of threads for recovery from metadata store when Kyuubi server restarting.|int|1.6.0
-kyuubi.metadata.request.retry.interval|PT5S|The interval to check and trigger the metadata request retry tasks.|duration|1.6.0
-kyuubi.metadata.request.retry.queue.size|65536|The maximum queue size for buffering metadata requests in memory when the external metadata storage is down. Requests will be dropped if the queue exceeds.|int|1.6.0
-kyuubi.metadata.request.retry.threads|10|Number of threads in the metadata request retry manager thread pool. The metadata store might be unavailable sometimes and the requests will fail, to tolerant for this case and unblock the main thread, we support to retry the failed requests in async way.|int|1.6.0
-kyuubi.metadata.store.class|org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore|Fully qualified class name for server metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.database.schema.init|true|Whether to init the jdbc metadata store database schema.|boolean|1.6.0
-kyuubi.metadata.store.jdbc.database.type|DERBY|The database type for server jdbc metadata store.<ul> <li>DERBY: Apache Derby, jdbc driver `org.apache.derby.jdbc.AutoloadedDriver`.</li> <li>MYSQL: MySQL, jdbc driver `com.mysql.jdbc.Driver`.</li> <li>CUSTOM: User-defined database type, need to specify corresponding jdbc driver.</li> Note that: The jdbc datasource is powered by HiKariCP, for datasource properties, please specify them with prefix: kyuubi.metadata.store.jdbc.datasource. For e [...]
-kyuubi.metadata.store.jdbc.driver|&lt;undefined&gt;|JDBC driver class name for server jdbc metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.password||The password for server jdbc metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.url|jdbc:derby:memory:kyuubi_state_store_db;create=true|The jdbc url for server jdbc metadata store. By defaults, it is a DERBY in-memory database url, and the state information is not shared across kyuubi instances. To enable multiple kyuubi instances high available, please specify a production jdbc url.|string|1.6.0
-kyuubi.metadata.store.jdbc.user||The username for server jdbc metadata store.|string|1.6.0
-
-
-### Metrics
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.metrics.console.interval|PT5S|How often should report metrics to console|duration|1.2.0
-kyuubi.metrics.enabled|true|Set to true to enable kyuubi metrics system|boolean|1.2.0
-kyuubi.metrics.json.interval|PT5S|How often should report metrics to json file|duration|1.2.0
-kyuubi.metrics.json.location|metrics|Where the json metrics file located|string|1.2.0
-kyuubi.metrics.prometheus.path|/metrics|URI context path of prometheus metrics HTTP server|string|1.2.0
-kyuubi.metrics.prometheus.port|10019|Prometheus metrics HTTP server port|int|1.2.0
-kyuubi.metrics.reporters|JSON|A comma separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to json file periodically.</li> <li>PROMETHEUS - PrometheusReporter which exposes metrics in prometheus format.</li> <li>SLF4J - Slf4jReporter which outputs measurements to system log p [...]
-kyuubi.metrics.slf4j.interval|PT5S|How often should report metrics to SLF4J logger|duration|1.2.0
-
-
-### Operation
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.operation.idle.timeout|PT3H|Operation will be closed when it's not accessed for this duration of time|duration|1.0.0
-kyuubi.operation.interrupt.on.cancel|true|When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished.|boolean|1.2.0
-kyuubi.operation.language|SQL|Choose a programing language for the following inputs <ul><li>SQL: (Default) Run all following statements as SQL queries.</li> <li>SCALA: Run all following input a scala codes</li></ul>|string|1.5.0
-kyuubi.operation.log.dir.root|server_operation_logs|Root directory for query operation log at server-side.|string|1.4.0
-kyuubi.operation.plan.only.excludes|ResetCommand,SetCommand,SetNamespaceCommand,UseStatement,SetCatalogAndNamespace|Comma-separated list of query plan names, in the form of simple class names, i.e, for `set abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as `switch databases`, `set properties`, or `create temporary view` e.t.c, which are used for setup evaluating environments for analyzing actual queries, we can use this config to exclude them and let them take  [...]
-kyuubi.operation.plan.only.mode|NONE|Whether to perform the statement in a PARSE, ANALYZE, OPTIMIZE, PHYSICAL, EXECUTION only way without executing the query. When it is NONE, the statement will be fully executed|string|1.4.0
-kyuubi.operation.progress.enabled|false|Whether to enable the operation progress. When true, the operation progress will be returned in `GetOperationStatus`.|boolean|1.6.0
-kyuubi.operation.query.timeout|&lt;undefined&gt;|Timeout for query executions at server-side, take affect with client-side timeout(`java.sql.Statement.setQueryTimeout`) together, a running query will be cancelled automatically if timeout. It's off by default, which means only client-side take fully control whether the query should timeout or not. If set, client-side timeout capped at this point. To cancel the queries right away without waiting task to finish, consider enabling kyuubi.ope [...]
-kyuubi.operation.result.max.rows|0|Max rows of Spark query results. Rows that exceeds the limit would be ignored. By setting this value to 0 to disable the max rows limit.|int|1.6.0
-kyuubi.operation.scheduler.pool|&lt;undefined&gt;|The scheduler pool of job. Note that, this config should be used after change Spark config spark.scheduler.mode=FAIR.|string|1.1.1
-kyuubi.operation.spark.listener.enabled|true|When set to true, Spark engine registers a SQLOperationListener before executing the statement, logs a few summary statistics when each stage completes.|boolean|1.6.0
-kyuubi.operation.status.polling.timeout|PT5S|Timeout(ms) for long polling asynchronous running sql query's status|duration|1.0.0
-
-
-### Server
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.server.limit.connections.per.ipaddress|&lt;undefined&gt;|Maximum kyuubi server connections per ipaddress. Any user exceeding this limit will not be allowed to connect.|int|1.6.0
-kyuubi.server.limit.connections.per.user|&lt;undefined&gt;|Maximum kyuubi server connections per user. Any user exceeding this limit will not be allowed to connect.|int|1.6.0
-kyuubi.server.limit.connections.per.user.ipaddress|&lt;undefined&gt;|Maximum kyuubi server connections per user:ipaddress combination. Any user-ipaddress exceeding this limit will not be allowed to connect.|int|1.6.0
-kyuubi.server.name|&lt;undefined&gt;|The name of Kyuubi Server.|string|1.5.0
-kyuubi.server.redaction.regex|&lt;undefined&gt;|Regex to decide which Kyuubi configuration properties contain sensitive information. When this regex matches a property key or value, the value is redacted from the various logs.||1.6.0
-
-
-### Session
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.session.check.interval|PT5M|The check interval for session timeout.|duration|1.0.0
-kyuubi.session.conf.advisor|&lt;undefined&gt;|A config advisor plugin for Kyuubi Server. This plugin can provide some custom configs for different user or session configs and overwrite the session configs before open a new session. This config value should be a class which is a child of 'org.apache.kyuubi.plugin.SessionConfAdvisor' which has zero-arg constructor.|string|1.5.0
-kyuubi.session.conf.ignore.list||A comma separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
-kyuubi.session.conf.restrict.list||A comma separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
-kyuubi.session.engine.alive.probe.enabled|false|Whether to enable the engine alive probe. If true, a companion thrift client is created that sends simple requests to check whether the engine is still alive.|boolean|1.6.0
-kyuubi.session.engine.alive.probe.interval|PT10S|The interval for engine alive probe.|duration|1.6.0
-kyuubi.session.engine.alive.timeout|PT2M|The timeout for engine alive. If there is no alive probe success in the last timeout window, the engine will be marked as no-alive.|duration|1.6.0
-kyuubi.session.engine.check.interval|PT1M|The check interval for engine timeout|duration|1.0.0
-kyuubi.session.engine.flink.main.resource|&lt;undefined&gt;|The package used to create Flink SQL engine remote job. If it is undefined, Kyuubi will use the default|string|1.4.0
-kyuubi.session.engine.flink.max.rows|1000000|Max rows of Flink query results. For batch queries, rows that exceeds the limit would be ignored. For streaming queries, the query would be canceled if the limit is reached.|int|1.5.0
-kyuubi.session.engine.hive.main.resource|&lt;undefined&gt;|The package used to create Hive engine remote job. If it is undefined, Kyuubi will use the default|string|1.6.0
-kyuubi.session.engine.idle.timeout|PT30M|engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.|duration|1.0.0
-kyuubi.session.engine.initialize.timeout|PT3M|Timeout for starting the background engine, e.g. SparkSQLEngine.|duration|1.0.0
-kyuubi.session.engine.launch.async|true|When opening kyuubi session, whether to launch backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously.|boolean|1.4.0
-kyuubi.session.engine.log.timeout|PT24H|If we use Spark as the engine then the session submit log is the console output of spark-submit. We will retain the session submit log until over the config value.|duration|1.1.0
-kyuubi.session.engine.login.timeout|PT15S|The timeout of creating the connection to remote sql query engine|duration|1.0.0
-kyuubi.session.engine.share.level|USER|(deprecated) - Using kyuubi.engine.share.level instead|string|1.0.0
-kyuubi.session.engine.spark.main.resource|&lt;undefined&gt;|The package used to create Spark SQL engine remote application. If it is undefined, Kyuubi will use the default|string|1.0.0
-kyuubi.session.engine.spark.max.lifetime|PT0S|Max lifetime for spark engine, the engine will self-terminate when it reaches the end of life. 0 or negative means not to self-terminate.|duration|1.6.0
-kyuubi.session.engine.spark.progress.timeFormat|yyyy-MM-dd HH:mm:ss.SSS|The time format of the progress bar|string|1.6.0
-kyuubi.session.engine.spark.progress.update.interval|PT1S|Update period of progress bar.|duration|1.6.0
-kyuubi.session.engine.spark.showProgress|false|When true, show the progress bar in the spark engine log.|boolean|1.6.0
-kyuubi.session.engine.startup.error.max.size|8192|During engine bootstrapping, if error occurs, using this config to limit the length error message(characters).|int|1.1.0
-kyuubi.session.engine.startup.maxLogLines|10|The maximum number of engine log lines when errors occur during engine startup phase. Note that this max lines is for client-side to help track engine startup issue.|int|1.4.0
-kyuubi.session.engine.startup.waitCompletion|true|Whether to wait for completion after engine starts. If false, the startup process will be destroyed after the engine is started. Note that only use it when the driver is not running locally, such as yarn-cluster mode; Otherwise, the engine will be killed.|boolean|1.5.0
-kyuubi.session.engine.trino.connection.catalog|&lt;undefined&gt;|The default catalog that trino engine will connect to|string|1.5.0
-kyuubi.session.engine.trino.connection.url|&lt;undefined&gt;|The server url that trino engine will connect to|string|1.5.0
-kyuubi.session.engine.trino.main.resource|&lt;undefined&gt;|The package used to create Trino engine remote job. If it is undefined, Kyuubi will use the default|string|1.5.0
-kyuubi.session.engine.trino.showProgress|true|When true, show the progress bar and final info in the trino engine log.|boolean|1.6.0
-kyuubi.session.engine.trino.showProgress.debug|false|When true, show the progress debug info in the trino engine log.|boolean|1.6.0
-kyuubi.session.idle.timeout|PT6H|session idle timeout, it will be closed when it's not accessed for this duration|duration|1.2.0
-kyuubi.session.local.dir.allow.list||The local dir list that are allowed to access by the kyuubi session application. User might set some parameters such as `spark.files` and it will upload some local files when launching the kyuubi engine, if the local dir allow list is defined, kyuubi will check whether the path to upload is in the allow list. Note that, if it is empty, there is no limitation for that and please use absolute path list.|seq|1.6.0
-kyuubi.session.name|&lt;undefined&gt;|A human readable name of session and we use empty string by default. This name will be recorded in event. Note that, we only apply this value from session conf.|string|1.4.0
-kyuubi.session.timeout|PT6H|(deprecated)session timeout, it will be closed when it's not accessed for this duration|duration|1.0.0
-
-
-### Spnego
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.spnego.keytab|&lt;undefined&gt;|Keytab file for SPNego principal|string|1.6.0
-kyuubi.spnego.principal|&lt;undefined&gt;|SPNego service principal, typical value would look like HTTP/_HOST@EXAMPLE.COM. SPNego service principal would be used when restful Kerberos security is enabled. This needs to be set only if SPNEGO is to be used in authentication.|string|1.6.0
-
-
-### Zookeeper
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-kyuubi.zookeeper.embedded.client.port|2181|clientPort for the embedded zookeeper server to listen for client connections, a client here could be Kyuubi server, engine and JDBC client|int|1.2.0
-kyuubi.zookeeper.embedded.client.port.address|&lt;undefined&gt;|clientPortAddress for the embedded zookeeper server to|string|1.2.0
-kyuubi.zookeeper.embedded.data.dir|embedded_zookeeper|dataDir for the embedded zookeeper server where stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.|string|1.2.0
-kyuubi.zookeeper.embedded.data.log.dir|embedded_zookeeper|dataLogDir for the embedded zookeeper server, where the transaction log is written.|string|1.2.0
-kyuubi.zookeeper.embedded.directory|embedded_zookeeper|The temporary directory for the embedded zookeeper server|string|1.0.0
-kyuubi.zookeeper.embedded.max.client.connections|120|maxClientCnxns for the embedded zookeeper server, limiting the number of concurrent connections of a single client identified by IP address|int|1.2.0
-kyuubi.zookeeper.embedded.max.session.timeout|60000|maxSessionTimeout in milliseconds that the embedded zookeeper server will allow the client to negotiate. Defaults to 20 times the tickTime|int|1.2.0
-kyuubi.zookeeper.embedded.min.session.timeout|6000|minSessionTimeout in milliseconds that the embedded zookeeper server will allow the client to negotiate. Defaults to 2 times the tickTime|int|1.2.0
-kyuubi.zookeeper.embedded.port|2181|The port of the embedded zookeeper server|int|1.0.0
-kyuubi.zookeeper.embedded.tick.time|3000|tickTime in milliseconds for the embedded zookeeper server|int|1.2.0
-
-## Spark Configurations
-
-### Via spark-defaults.conf
-
-Setting them in `$SPARK_HOME/conf/spark-defaults.conf` supplies default values for the SQL engine application. Available properties can be found in the Spark official online documentation for [Spark Configurations](http://spark.apache.org/docs/latest/configuration.html)
-
-### Via kyuubi-defaults.conf
-
-Setting them in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` supplies default values for the SQL engine application too. These properties will override all settings in `$SPARK_HOME/conf/spark-defaults.conf`
-
-### Via JDBC Connection URL
-
-Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example: ```jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g```
-
-- **Runtime SQL Configuration**
-
-  - For [Runtime SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration), they will take effect every time
-
-- **Static SQL and Spark Core Configuration**
-
-  - For [Static SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other Spark core configs, e.g. `spark.executor.memory`, they will take effect only if there is no existing SQL engine application. Otherwise, they will just be ignored
-
-### Via SET Syntax
-
-Please refer to the Spark official online documentation for [SET Command](http://spark.apache.org/docs/latest/sql-ref-syntax-aux-conf-mgmt-set.html)
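-
-For example, a minimal sketch of adjusting a runtime SQL configuration within an open session (the value is illustrative):
-
-```sql
--- applies to every query executed afterwards in this session
-SET spark.sql.shuffle.partitions=2;
-```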
-
-## Flink Configurations
-
-### Via flink-conf.yaml
-
-Setting them in `$FLINK_HOME/conf/flink-conf.yaml` supplies default values for the SQL engine application. Available properties can be found in the Flink official online documentation for [Flink Configurations](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/)
-
-### Via kyuubi-defaults.conf
-
-Setting them in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` supplies default values for the SQL engine application too. You can use properties with the additional prefix `flink.` to override settings in `$FLINK_HOME/conf/flink-conf.yaml`.
-
-For example:
-```
-flink.parallelism.default 2
-flink.taskmanager.memory.process.size 5g
-```
-
-The options above in `kyuubi-defaults.conf` will set `parallelism.default: 2` and `taskmanager.memory.process.size: 5g` in the Flink configuration.
-
-### Via JDBC Connection URL
-
-Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example: ```jdbc:hive2://localhost:10009/default;#parallelism.default=2;taskmanager.memory.process.size=5g```
-
-### Via SET Statements
-
-Please refer to the Flink official online documentation for [SET Statements](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/set/)
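-
-For example, a minimal sketch of a Flink SET statement (quoting requirements may vary with the Flink version; the value is illustrative):
-
-```sql
-SET 'parallelism.default' = '2';
-```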
-
-## Logging
-
-Kyuubi uses [log4j](https://logging.apache.org/log4j/2.x/) for logging. You can configure it using `$KYUUBI_HOME/conf/log4j2.xml`.
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one or more
-  ~ contributor license agreements.  See the NOTICE file distributed with
-  ~ this work for additional information regarding copyright ownership.
-  ~ The ASF licenses this file to You under the Apache License, Version 2.0
-  ~ (the "License"); you may not use this file except in compliance with
-  ~ the License.  You may obtain a copy of the License at
-  ~
-  ~     http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<!-- Provide log4j2.xml.template to fix `ERROR Filters contains invalid attributes "onMatch", "onMismatch"`, see KYUUBI-2247 -->
-<!-- Extra logging related to initialization of Log4j.
- Set to debug or trace if log4j initialization is failing. -->
-<Configuration status="INFO">
-    <Appenders>
-        <Console name="stdout" target="SYSTEM_OUT">
-            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %p %c: %m%n"/>
-            <Filters>
-                <RegexFilter regex=".*Thrift error occurred during processing of message.*" onMatch="DENY" onMismatch="NEUTRAL"/>
-            </Filters>
-        </Console>
-    </Appenders>
-    <Loggers>
-        <Root level="INFO">
-            <AppenderRef ref="stdout"/>
-        </Root>
-        <Logger name="org.apache.kyuubi.ctl.ServiceControlCli" level="error" additivity="false">
-            <AppenderRef ref="stdout"/>
-        </Logger>
-        <!--
-        <Logger name="org.apache.kyuubi.server.mysql.codec" level="trace" additivity="false">
-            <AppenderRef ref="stdout"/>
-        </Logger>
-        -->
-        <Logger name="org.apache.hive.beeline.KyuubiBeeLine" level="error" additivity="false">
-            <AppenderRef ref="stdout"/>
-        </Logger>
-    </Loggers>
-</Configuration>
-```
-
-## Other Configurations
-
-### Hadoop Configurations
-
-Specify `HADOOP_CONF_DIR` as the directory that contains the Hadoop configuration files, or treat them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to the [Apache Hadoop](http://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
-
-### Hive Configurations
-
-These configurations are used for the SQL engine application to talk to the Hive MetaStore and can be configured in a `hive-site.xml`. Place it in the `$SPARK_HOME/conf` directory, or treat them as Spark properties with a `spark.hadoop.` prefix.
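-
-For example, a minimal sketch of the prefixed form (`hive.metastore.uris` is the usual key for a remote Hive MetaStore; the host below is illustrative):
-
-```properties
-spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083
-```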
-
-## User Defaults
-
-In Kyuubi, we can configure user default settings to meet separate needs. These user defaults override system defaults, but will be overridden by those from the [JDBC Connection URL](#via-jdbc-connection-url) or [Set Command](#via-set-syntax) where applicable. They will take effect when creating the SQL engine application ONLY.
-User default settings are in the form of `___{username}___.{config key}`. There are three consecutive underscores (`_`) on both sides of the `username` and a dot (`.`) that separates the config key from the prefix. For example:
-```bash
-# For system defaults
-spark.master=local
-spark.sql.adaptive.enabled=true
-# For a user named kent
-___kent___.spark.master=yarn
-___kent___.spark.sql.adaptive.enabled=false
-# For a user named bob
-___bob___.spark.master=spark://master:7077
-___bob___.spark.executor.memory=8g
-```
-
-In the above case, unless overridden by related configurations from the [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and obey the Spark AQE behavior of the Kyuubi system default. On the other hand, users who do not have custom configurations will use the system defaults.
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/spark/aqe.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/spark/aqe.md.txt
deleted file mode 100644
index f85fcbf..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/spark/aqe.md.txt
+++ /dev/null
@@ -1,264 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# How To Use Spark Adaptive Query Execution (AQE) in Kyuubi
-
-## The Basics of AQE
-
-Spark Adaptive Query Execution (AQE) is a query re-optimization that occurs during query execution.
-
-In terms of technical architecture, the AQE is a framework of dynamic planning and replanning of queries based on runtime statistics,
-which supports a variety of optimizations such as,
-
-- Dynamically Switch Join Strategies
-- Dynamically Coalesce Shuffle Partitions
-- Dynamically Handle Skew Joins
-
-In Kyuubi, we strongly recommend that you turn on all capabilities of AQE by default for Kyuubi engines, no matter what platform you run Kyuubi and Spark on.
-
-### Dynamically Switch Join Strategies
-
-Spark supports several join strategies, among which `BroadcastHash Join` is usually the most performant when any join side fits well in memory. And for this reason, Spark plans a `BroadcastHash Join` if the estimated size of a join relation is less than the `spark.sql.autoBroadcastJoinThreshold`.
-
-```properties
-spark.sql.autoBroadcastJoinThreshold=10M
-```
-
-Without AQE, the estimated size of join relations comes from the statistics of the original table. It can go wrong in most real-world cases. For example, the join relation is a convergent but composite operation rather than a single table scan. In this case, Spark might not be able to switch the join-strategy to `BroadcastHash Join`.  While with AQE, we can runtime calculate the size of the composite operation accurately.  And then, Spark now can replan the join strategy unmistakably if  [...]
-
-<div align=center>
-
-![](../../imgs/spark/aqe_switch_join.png)
-
-</div>
-
-<p align=right>
-<em>
-<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
-</em>
-</p>
-
-What's more, when `spark.sql.adaptive.localShuffleReader.enabled=true` and after converting `SortMerge Join` to `BroadcastHash Join`, Spark also does a further optimization to reduce network traffic by converting a regular shuffle to a localized shuffle.
-
-<div align=center>
-
-![](../../imgs/spark/localshufflereader.png)
-
-</div>
-
-As shown in the above fig, the local shuffle reader can read all necessary shuffle files from its local storage, actually without performing the shuffle across the network.
-
-The local shuffle reader optimization consists of avoiding shuffle when the `SortMerge Join` transforms to `BroadcastHash Join` after applying the AQE rules.
-
-### Dynamically Coalesce Shuffle Partitions
-
-Without this feature, Spark itself could be a small files maker sometimes, especially in a pure SQL way like Kyuubi does, for example,
-
-1. When `spark.sql.shuffle.partitions` is set too large compared to the total output size, very small or empty files appear after a shuffle stage.
-2. When Spark performs a series of optimized `BroadcastHash Join` and `Union` together, the final output size for each partition might be reduced by the join conditions. However, the total number of final output files can explode.
-3. Some pipeline jobs with selective filters to produce temporary data.
-4. etc.
-
-Reading small files leads to very small partitions or tasks. Spark tasks will have worse I/O throughput and tend to suffer more from scheduling overhead and task setup overhead.
-
-<div align=center>
-
-![](../../imgs/spark/blog-adaptive-query-execution-2.png)
-
-</div>
-
-<p align=right>
-<em>
-<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
-</em>
-</p>
-
-Combining small partitions saves resources and improves cluster throughput. Spark provides several ways to handle small file issues, for example, adding an extra shuffle operation on the partition columns with the `distribute by` clause or using `HINT`[5]. In most scenarios, you need to have a good grasp of your data, Spark jobs, and configurations to apply these solutions case by case. Mostly, the daily used config - `spark.sql.shuffle.partitions` is data-dependent and unchangeable with [...]
-
-But with AQE, things become more comfortable for you as Spark will do the partition coalescing automatically.
-
-<div align=center>
-
-![](../../imgs/spark/blog-adaptive-query-execution-3.png)
-
-</div>
-<p align=right>
-<em>
-<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
-</em>
-</p>
-
-It can simplify the tuning of shuffle partition numbers when running Spark SQL queries. You do not need to set a proper shuffle partition number to fit your dataset.
-
-To enable this feature, we need to set the below two configs to true.
-
-```properties
-spark.sql.adaptive.enabled=true
-spark.sql.adaptive.coalescePartitions.enabled=true
-```
-
-#### Other Tips for Best Practices
-
-For further tuning our Spark jobs with this feature, we also need to be aware of these configs.
-
-```properties
-spark.sql.adaptive.advisoryPartitionSizeInBytes=128m
-spark.sql.adaptive.coalescePartitions.minPartitionNum=1
-spark.sql.adaptive.coalescePartitions.initialPartitionNum=200
-```
-
-##### How to set `spark.sql.adaptive.advisoryPartitionSizeInBytes`?
-
-It stands for the advisory size in bytes of the shuffle partition during adaptive query execution, which takes effect when Spark coalesces small shuffle partitions or splits a skewed shuffle partition. The default value of `spark.sql.adaptive.advisoryPartitionSizeInBytes` is 64M. Typically, if we are reading and writing data with HDFS, matching it with the block size of HDFS should be the best choice, i.e. 128MB or 256MB.
-
-Consequently, all blocks or partitions in Spark and files in HDFS are chopped up into 128MB/256MB chunks. And think about it: now all tasks for scans, sinks, and middle shuffle maps are dealing with mostly even-sized data partitions. It will make it much easier for us to set up executor resources, or even have one size fit all.
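-
-For example, a minimal sketch assuming 128MB HDFS blocks:
-
-```properties
-spark.sql.adaptive.enabled=true
-spark.sql.adaptive.advisoryPartitionSizeInBytes=128m
-```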
-
-##### How to set `spark.sql.adaptive.coalescePartitions.minPartitionNum`?
-
-It stands for the suggested (not guaranteed) minimum number of shuffle partitions after coalescing. If not set, the default value is the default parallelism of the Spark application. The default parallelism is defined by `spark.default.parallelism` or else the total count of cores registered. I guess the motivation of this behavior made by the Spark community is to maximize the use of the resources and concurrency of the application.
-
-But there are always exceptions. Relating these two seemingly unrelated parameters can be somewhat tricky for users. This config is optional, which means users may not need to touch it in most real-world cases. But `spark.default.parallelism` has a long history and is well known. If users unexpectedly set the default parallelism to an illegitimately high value, it could block AQE from coalescing partitions to a fair number. Another scenario that requires special attention is writing [...]
-
-##### How to set `spark.sql.adaptive.coalescePartitions.initialPartitionNum`?
-
-It stands for the initial number of shuffle partitions before coalescing. By default, it equals `spark.sql.shuffle.partitions` (200). Firstly, it's better to set it explicitly rather than falling back to `spark.sql.shuffle.partitions`. The Spark community suggests setting it to a large number, as Spark will dynamically coalesce shuffle partitions, and I cannot agree more.
-
-### Dynamically Handle Skew Joins
-
-Without AQE, data skewness is very likely to occur for map-reduce computing models in the shuffle phase. Data skewness can cause Spark jobs to have one or more long-tail tasks, severely downgrading queries' performance. This feature dynamically handles skew in `SortMerge Join` by splitting (and replicating if needed) skewed tasks into roughly evenly sized tasks. For example, the optimization will split oversized partitions into subpartitions and join them to the other join side's corre [...]
-
-<div align=center>
-
-![](../../imgs/spark/blog-adaptive-query-execution-6.png)
-
-</div>
-<p align=right>
-<em>
-<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
-</em>
-</p>
-
-To enable this feature, we need to set the below two configs to true.
-
-```properties
-spark.sql.adaptive.enabled=true
-spark.sql.adaptive.skewJoin.enabled=true
-```
-
-#### Other Tips for Best Practices
-
-For further tuning our Spark jobs with this feature, we also need to be aware of these configs.
-
-```properties
-spark.sql.adaptive.skewJoin.skewedPartitionFactor=5
-spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256M
-spark.sql.adaptive.advisoryPartitionSizeInBytes=64M
-```
-
-##### How to set `spark.sql.adaptive.skewJoin.skewedPartitionFactor` and `skewedPartitionThresholdInBytes`?
-
-Spark uses these two configs and the median (**not average**) partition size to detect whether a partition is skewed or not.
-
-```markdown
-partition size > skewedPartitionFactor * the median partition size && \
-partition size > skewedPartitionThresholdInBytes
-```
-
-As Spark splits skewed partitions targeting [spark.sql.adaptive.advisoryPartitionSizeInBytes](aqe.html#how-to-set-spark-sql-adaptive-advisorypartitionsizeinbytes), ideally `skewedPartitionThresholdInBytes` should be larger than `advisoryPartitionSizeInBytes`. In this case, anytime you increase `advisoryPartitionSizeInBytes`, you should also increase `skewedPartitionThresholdInBytes` if you intend to enable the feature.
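-
-For example, a sketch that keeps the threshold larger than the advisory size (the values are illustrative, reused from the best-practices settings later on this page):
-
-```properties
-spark.sql.adaptive.advisoryPartitionSizeInBytes=256m
-spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=400m
-```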
-
-### Hidden Features
-
-#### DemoteBroadcastHashJoin
-
-Internally, Spark has an optimization rule that detects a join child with a high ratio of empty partitions and adds a no-broadcast-hash-join hint to avoid broadcasting it.
-
-```
-spark.sql.adaptive.nonEmptyPartitionRatioForBroadcastJoin=0.2
-```
-
-By default, if less than 20% of the dataset's partitions contain data, Spark will not broadcast the dataset.
-
-#### EliminateJoinToEmptyRelation
-
-This optimization rule detects and converts a Join to an empty LocalRelation.
-
-
-#### Disabling the Hidden Features
-
-We can exclude some of the AQE additional rules if performance regression or bug occurs. For example,
-
-```sql
-SET spark.sql.adaptive.optimizer.excludedRules=org.apache.spark.sql.execution.adaptive.DemoteBroadcastHashJoin
-```
-
-## Best Practices for Applying AQE to Kyuubi
-
-Kyuubi is a long-running service to make it easier for end-users to use Spark SQL without having much of Spark's basic knowledge. It is essential to have a basic configuration that works for most scenarios on the server-side.
-
-
-### Setting Default Configurations
-
-[Configuring by `spark-defaults.conf`](settings.html#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with AQE. All engines will be instantiated with AQE enabled.
-
-Here is a config setting that we use in our platform when deploying Kyuubi.
-
-```properties
-spark.sql.adaptive.enabled=true
-spark.sql.adaptive.forceApply=false
-spark.sql.adaptive.logLevel=info
-spark.sql.adaptive.advisoryPartitionSizeInBytes=256m
-spark.sql.adaptive.coalescePartitions.enabled=true
-spark.sql.adaptive.coalescePartitions.minPartitionNum=1
-spark.sql.adaptive.coalescePartitions.initialPartitionNum=8192
-spark.sql.adaptive.fetchShuffleBlocksInBatch=true
-spark.sql.adaptive.localShuffleReader.enabled=true
-spark.sql.adaptive.skewJoin.enabled=true
-spark.sql.adaptive.skewJoin.skewedPartitionFactor=5
-spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=400m
-spark.sql.adaptive.nonEmptyPartitionRatioForBroadcastJoin=0.2
-spark.sql.adaptive.optimizer.excludedRules
-spark.sql.autoBroadcastJoinThreshold=-1
-```
-#### Tips
-
-- Turning on AQE by default can significantly improve the user experience.
-- All other sub-features are enabled.
-- `advisoryPartitionSizeInBytes` targets the HDFS block size.
-- `minPartitionNum` is set to 1 so that coalescing happens first.
-- `initialPartitionNum` has a high value.
-- Since AQE requires at least one shuffle, ideally we need to set `autoBroadcastJoinThreshold` to -1 to involve `SortMerge Join` with a shuffle for all user queries with joins. But then, the Dynamically Switch Join Strategies feature seems unable to be applied later in this case. It appears to be a limitation of Spark AQE so far.
-
-### Dynamically Setting
-
-All AQE related configurations are runtime changeable, which means that you can still modify specific configs with `SET` syntax for each SQL query, giving more precise control on the client side.
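-
-For example, a minimal client-side sketch that tunes AQE for the following queries in the same session (the values are illustrative):
-
-```sql
-SET spark.sql.adaptive.advisoryPartitionSizeInBytes=256m;
-SET spark.sql.adaptive.skewJoin.enabled=true;
-```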
-
-
-## Spark Known issues
-
-[SPARK-33933: Broadcast timeout happened unexpectedly in AQE](https://issues.apache.org/jira/browse/SPARK-33933)
-
-For Spark versions (< 3.1), we need to increase `spark.sql.broadcastTimeout` (300s by default) even if the broadcast relation is tiny.
-
-For other potential problems that may be found in the AQE features of Spark, you may refer to [SPARK-33828: SQL Adaptive Query Execution QA](https://issues.apache.org/jira/browse/SPARK-33828).
-
-## References
-
-1. [Adaptive Query Execution](https://spark.apache.org/docs/latest/sql-performance-tuning.html#adaptive-query-execution)
-2. [Adaptive Query Execution: Speeding Up Spark SQL at Runtime](https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html)
-3. [SPARK-31412: New Adaptive Query Execution in Spark SQL](https://issues.apache.org/jira/browse/SPARK-31412)
-4. [SPARK-28560: Optimize shuffle reader to local shuffle reader when smj converted to bhj in adaptive execution](https://issues.apache.org/jira/browse/SPARK-28560)
-5. [Coalesce and Repartition Hint for SQL Queries](https://issues.apache.org/jira/browse/SPARK-24940)
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/spark/dynamic_allocation.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/spark/dynamic_allocation.md.txt
deleted file mode 100644
index 7b35e4b..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/spark/dynamic_allocation.md.txt
+++ /dev/null
@@ -1,237 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi
-
-
-When we adopt Kyuubi in a production environment,
-we always want to use the environment's computing resources more cost-effectively and efficiently.
-Cluster managers such as  K8S and Yarn manage the cluster compute resources,
-divided into different queues or namespaces with different ACLs and quotas.
-
-In Kyuubi, we acquire computing resources from the cluster manager to submit the engines.
-The engines respond to various types of client requests,
-some of which consume many computing resources to process,
-while others may require very few resources to complete.
-If we have fixed-sized engines,
-a.k.a. with a fixed number for `spark.executor.instances`,
-it may cause a waste of resources for some lightweight workloads,
-while for some heavyweight workloads,
-it probably would not have enough concurrency capacity, resulting in poor performance.
-
-When the engine has idle executors,
-we should release them back to the resource pool promptly,
-and conversely, when the engine is doing heavy tasks,
-we should be able to get and use more resources more efficiently.
-On the one hand, we need to rely on the resource manager's capabilities for efficient resource allocation,
-resource isolation, and sharing.
-On the other hand, we need to enable Spark's DRA feature for the engines' executors' elastic scaling.
-
-
-## The Basics of Dynamic Resource Allocation
-
-Spark provides a mechanism to dynamically adjust the application resources based on the workload, which means that an application may give resources back to the cluster if they are no longer used and request them again later when there is demand.
-This feature is handy if multiple applications share resources on YARN, Kubernetes, and other platforms.
-
-For Kyuubi engines,
-which are typical Spark applications,
-the dynamic allocation allows Spark to dynamically scale the cluster resources allocated to them based on the workloads.
-When dynamic allocation is enabled,
-and an engine has a backlog of pending tasks,
-it can request executors via ```ExecutorAllocationManager```.
-When the engine has executors that become idle, the executors are released,
-and the occupied resources are given back to the cluster manager.
-Then other engines or other applications running in the same queue could acquire the resources.
-
-
-## How to Enable Dynamic Resource Allocation
-
-The prerequisite for enabling this feature is for downstream stages to have proper access to shuffle data, even if the executors that generated the data are recycled.
-
-Spark provides two implementations for shuffle data tracking. If either is enabled, we can use the  DRA feature properly.
-
-### Dynamic Resource Allocation w/ External Shuffle Service
-
-Having an external shuffle service (ESS) makes sure that all the data is stored outside of executors.
-This prerequisite was needed as Spark needed to ensure that the executors' removal does not remove shuffle data.
-When deploying Kyuubi with a cluster manager that provides ESS, enable DRA for all the engines with the configurations below.
-
-```properties
-spark.dynamicAllocation.enabled=true
-spark.shuffle.service.enabled=true
-```
-
-Another thing to be sure of is that ```spark.shuffle.service.port``` should be configured to point to the port on which the ESS is running.
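-
-For example, a minimal sketch assuming the ESS listens on Spark's default shuffle service port 7337:
-
-```properties
-spark.shuffle.service.port=7337
-```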
-
-
-### Dynamic Allocation w/o External Shuffle Service
-
-Implementations of the ESS feature are cluster manager dependent. On Yarn, for instance, the ESS needs to be deployed cluster-wide and actually runs in Yarn's `NodeManager` component. Nevertheless, if you run Kyuubi's engines on Kubernetes, the ESS is not an option yet.
-Since Spark 3.0, DRA can run without ESS. The relevant feature, called ```Shuffle Tracking```, was introduced by [SPARK-27963](https://issues.apache.org/jira/browse/SPARK-27963).
-
-When deploying Kyuubi with a cluster manager that does not provide ESS, or where ESS is not attractive, enable DRA with ```Shuffle Tracking``` instead for all the engines with the configurations below.
-
-```properties
-spark.dynamicAllocation.enabled=true
-spark.dynamicAllocation.shuffleTracking.enabled=true
-```
-
-When ```Shuffle Tracking``` is enabled, ```spark.dynamicAllocation.shuffleTracking.timeout(default: infinity)``` controls the timeout for executors that are holding shuffle data. Spark will rely on the shuffles being garbage collected to be able to release executors by default. When the garbage collection is not cleaning up shuffles quickly enough, this timeout forces Spark to delete executors even when they are storing shuffle data.
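-
-For example, a sketch when relying on shuffle tracking instead of ESS (the 30min value is illustrative and matches the settings later on this page):
-
-```properties
-spark.dynamicAllocation.shuffleTracking.timeout=30min
-```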
-
-## Sizing for engines w/ Dynamic Resource Allocation
-
-Resources for a single executor, such as CPUs and memory, can be fixed size. So, the range [```minExecutors```, ```maxExecutors```] determines how many resources the engine can take from the cluster manager.
-
-On the one hand, the ```minExecutors``` tells Spark how many executors to keep at least. If it is set too close to 0 (the default), the engine might complain about a lack of resources if the cluster manager is quite busy for a long time.
-However, the larger the ```minExecutors``` goes, the more resources may be wasted during the engine's idle time.
-
-On the other hand, the ```maxExecutors``` determines the upper bound executors of an engine could reach. From the individual engine perspective, this value is the larger, the better, to handle heavier queries. However, we must limit it to a reasonable range in terms of the entire cluster's resources. Otherwise, a large query may trigger the engine where it runs to consume too many resources from the queue/namespace and occupy them for a considerable time, which could be a bad idea for us [...]
-
-The following Spark configurations consist of sizing for the DRA.
-
-
-```
-spark.dynamicAllocation.minExecutors=10
-spark.dynamicAllocation.maxExecutors=500
-```
-
-Additionally, another config called ```spark.dynamicAllocation.initialExecutors``` can be used to decide how many executors to request during engine bootstrapping or failover.
-
-Ideally, the size relationship between them should be ```minExecutors``` <= ```initialExecutors``` < ```maxExecutors```.
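-
-For example, a sketch that satisfies this relationship (the values are illustrative, reusing those from the snippet above):
-
-```properties
-spark.dynamicAllocation.minExecutors=10
-spark.dynamicAllocation.initialExecutors=10
-spark.dynamicAllocation.maxExecutors=500
-```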
-
-## Resource Allocation Policy
-
-When the DRA notices that the current resources are insufficient for the current workload, it will request more executors.
-
-<div align=center>
-
-![](../../imgs/spark/dra_task_pending.png)
-
-</div>
-
-By default, the dynamic allocation will request enough executors to maximize the parallelism according to the number of tasks to process.
-
-<div align=center>
-
-![](../../imgs/spark/dra_executor_added.png)
-
-</div>
-
-
-While this minimizes the latency of the job, with small tasks the default behavior can waste many resources due to executor allocation overhead, as some executors might not even do any work.
-
-In this case, we can adjust ```spark.dynamicAllocation.executorAllocationRatio``` a bit lower to reduce the number of executors w.r.t. full parallelism.  For instance, 0.5 will divide the target number of executors by 2.
-
-
-<div align=center>
-
-![](../../imgs/spark/dra_executor_add_ratio.png)
-
-</div>
-
-After finishing one task, the Spark Driver will schedule a new task for an executor with available cores. As pending tasks become fewer and fewer, some executors become idle because no new tasks are coming.
-
-<div align=center>
-
-![](../../imgs/spark/dra_task_fin.png)
-
-</div>
-
-If one executor reaches the maximum timeout, it will be removed.
-```properties
-spark.dynamicAllocation.executorIdleTimeout=60s
-spark.dynamicAllocation.cachedExecutorIdleTimeout=infinity
-```
-
-<div align=center>
-
-![](../../imgs/spark/dra_executor_removal.png)
-
-</div>
-
-
-If the DRA finds there have been pending tasks backlogged for more than the timeouts, new executors will be requested, controlled by the following configs.
-
-```properties
-spark.dynamicAllocation.schedulerBacklogTimeout=1s
-spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s
-```
-
-## Best Practices for Applying DRA to Kyuubi
-
-Kyuubi is a long-running service to make it easier for end-users to use Spark SQL without having much of Spark's basic knowledge. It is essential to have a basic configuration for resource management that works for most scenarios on the server-side.
-
-
-### Setting Default Configurations
-
-[Configuring by `spark-defaults.conf`](settings.html#via-spark-defaults-conf) at the engine side is the best way to set up Kyuubi with DRA. All engines will be instantiated with DRA enabled.
-
-Here is a config setting that we use in our platform when deploying Kyuubi.
-
-```properties
-spark.dynamicAllocation.enabled=true
-## false if you prefer shuffle tracking over ESS
-spark.shuffle.service.enabled=true
-spark.dynamicAllocation.initialExecutors=10
-spark.dynamicAllocation.minExecutors=10
-spark.dynamicAllocation.maxExecutors=500
-spark.dynamicAllocation.executorAllocationRatio=0.5
-spark.dynamicAllocation.executorIdleTimeout=60s
-spark.dynamicAllocation.cachedExecutorIdleTimeout=30min
-# true if you prefer shuffle tracking over ESS
-spark.dynamicAllocation.shuffleTracking.enabled=false
-spark.dynamicAllocation.shuffleTracking.timeout=30min
-spark.dynamicAllocation.schedulerBacklogTimeout=1s
-spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1s
-spark.cleaner.periodicGC.interval=5min
-```
-
-Note that, ```spark.cleaner.periodicGC.interval=5min``` is useful here when ```spark.dynamicAllocation.shuffleTracking.enabled``` is enabled, as we can tell Spark to be more active for shuffle data GC.
-
-### Setting User Default Settings
-On the server-side, the workloads for different users might be different.
-
-Then we can set different defaults for them via the [User Defaults](../settings.html#user-defaults) in ```$KYUUBI_HOME/conf/kyuubi-defaults.conf```
-
-```properties
-# For a user named kent
-___kent___.spark.dynamicAllocation.maxExecutors=20
-# For a user named bob
-___bob___.spark.dynamicAllocation.maxExecutors=600
-```
-In this case, the user named `kent` can use at most 20 executors for his engines, while `bob` can use up to 600 executors for better performance or to handle heavy workloads.
-
-### Dynamically Setting
-
-All DRA related configurations are static Spark core configurations and cannot be changed by `SET` syntax before each SQL query. For example,
-
-```sql
-SET spark.dynamicAllocation.maxExecutors=33;
-SELECT * FROM default.tableA;
-```
-
-For the above case, the value 33 will not take effect, as Spark does not support changing core configurations at runtime.
-
-Instead, end-users can set them via [JDBC Connection URL](../settings.html#via-jdbc-connection-url) for some specific cases.
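-
-For example, a sketch reusing the connection URL form from the settings page; such values only take effect when a new engine application is launched:
-
-```
-jdbc:hive2://localhost:10009/default;#spark.dynamicAllocation.maxExecutors=33
-```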
-
-
-## References
-
-1. [Spark Official Online Document: Dynamic Resource Allocation](https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation)
-2. [Spark Official Online Document: Dynamic Resource Allocation Configurations](https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation)
-3. [SPARK-27963: Allow dynamic allocation without an external shuffle service](https://issues.apache.org/jira/browse/SPARK-27963)
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/spark/incremental_collection.md.txt b/content/docs/r1.6.0-incubating/_sources/deployment/spark/incremental_collection.md.txt
deleted file mode 100644
index 6883cdd..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/spark/incremental_collection.md.txt
+++ /dev/null
@@ -1,121 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Solution for Big Result Sets
-
-Typically, when a user submits a SELECT query to the Spark SQL engine, the Driver calls `collect` to trigger calculation and
-collect the entire data set of all tasks (a.k.a. partitions of an RDD); after all partition data has arrived, the
-client pulls the result set from the Driver through the Kyuubi Server in small batches.
-
-Therefore, for a query with a big result set, the bottleneck is the Spark Driver. To avoid OOM, Spark has a configuration
-`spark.driver.maxResultSize`, which defaults to `1g`; you should enlarge it, as well as `spark.driver.memory`, if your
-query has a result set of several GB. But what if the result set size is dozens or even hundreds of GB? It would be best
-if you had an incremental collection mode.
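-
-For example, a sketch for a multi-GB result set (the values are illustrative and match the beeline example later on this page):
-
-```properties
-spark.driver.maxResultSize=8g
-spark.driver.memory=12g
-```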
-
-## Incremental collection
-
-Since v1.4.0-incubating, Kyuubi supports an incremental collection mode, which is a solution for big result sets. This feature
-is disabled by default; you can turn it on by setting the configuration `kyuubi.operation.incremental.collect` to `true`.
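-
-For example, a sketch that turns it on globally in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (it can also be changed per connection or per session, as shown below):
-
-```properties
-kyuubi.operation.incremental.collect=true
-```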
-
-The incremental collection changes the gather method from `collect` to `toLocalIterator`. `toLocalIterator` is a Spark
-action that sequentially submits Jobs to retrieve partitions. As each partition is retrieved, the client pulls
-the result set from the Driver through the Kyuubi Server in a streaming manner. It reduces the Driver memory requirement significantly, from
-the size of the complete result set to that of the largest partition.
-
-The incremental collection is not a silver bullet; you should turn it on carefully, because it can significantly hurt
-performance. Even in incremental collection mode, when multiple queries execute concurrently, each query still requires
-one partition of data in Driver memory. Therefore, it is still important to control the number of concurrent queries to
-avoid OOM.
-
-## Use in single connections
-
-As explained above, the incremental collection mode is not suitable for common query scenarios, so you can enable incremental
-collection mode for specific queries by using
-
-```
-beeline -u 'jdbc:hive2://kyuubi:10009/?spark.driver.maxResultSize=8g;spark.driver.memory=12g#kyuubi.engine.share.level=CONNECTION;kyuubi.operation.incremental.collect=true' \
-    --incremental=true \
-    -f big_result_query.sql
-```
-
-`--incremental=true` is required for the beeline client; otherwise, the entire result set is fetched and buffered before
-being displayed, which may cause client-side OOM.
-
-## Change incremental collection mode in session
-
-The configuration `kyuubi.operation.incremental.collect` can also be changed using `SET` in session.
-
-```
-~ beeline -u 'jdbc:hive2://localhost:10009'
-Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
-
-0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=true;
-+---------------------------------------+--------+
-|                  key                  | value  |
-+---------------------------------------+--------+
-| kyuubi.operation.incremental.collect  | true   |
-+---------------------------------------+--------+
-1 row selected (0.039 seconds)
-
-0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
-+-----+
-| id  |
-+-----+
-| 2   |
-| 6   |
-| 7   |
-| 0   |
-| 5   |
-| 3   |
-| 4   |
-| 1   |
-| 8   |
-| 9   |
-+-----+
-10 rows selected (1.929 seconds)
-
-0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=false;
-+---------------------------------------+--------+
-|                  key                  | value  |
-+---------------------------------------+--------+
-| kyuubi.operation.incremental.collect  | false  |
-+---------------------------------------+--------+
-1 row selected (0.027 seconds)
-
-0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
-+-----+
-| id  |
-+-----+
-| 2   |
-| 6   |
-| 7   |
-| 0   |
-| 5   |
-| 3   |
-| 4   |
-| 1   |
-| 8   |
-| 9   |
-+-----+
-10 rows selected (0.128 seconds)
-```
-
-From the Spark UI, we can see that in incremental collection mode, the query produces 5 jobs (in the red square), while in
-normal mode, it produces only 1 job (in the blue square).
-
-![](../../imgs/spark/incremental_collection.png)
diff --git a/content/docs/r1.6.0-incubating/_sources/deployment/spark/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/deployment/spark/index.rst.txt
deleted file mode 100644
index 0d75c50..0000000
--- a/content/docs/r1.6.0-incubating/_sources/deployment/spark/index.rst.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-The Spark SQL Engine Configuration Guide
-========================================
-
-Kyuubi aims to bring Spark to end-users who need not be experienced with Spark or anything else related to the big data area.
-End-users can write SQL queries through JDBC against Kyuubi and nothing more.
-The Kyuubi server-side or the corresponding engines could do most of the optimization.
-On the other hand, we don't wholly prevent end-users from handling specific cases specially to benefit from the following documentation.
-Even if you don't use Kyuubi, as a simple Spark user, I'm sure you'll find the following articles instructive.
-
-.. toctree::
-    :maxdepth: 2
-    :glob:
-
-    dynamic_allocation
-    aqe
-    incremental_collection
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/build_document.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/build_document.md.txt
deleted file mode 100644
index c3c310d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/build_document.md.txt
+++ /dev/null
@@ -1,74 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Building Kyuubi Documentation
-
-Follow the steps below and learn how to build the Kyuubi documentation as the one you are reading now.
-
-## Install & Activate `virtualenv`
-
-Firstly, install `virtualenv`, this is optional but recommended as it is useful to create an independent environment to resolve dependency issues for building the documentation.
-
-```bash
-pip install virtualenv
-```
-
-Switch to the `docs` root directory.
-
-```bash
-cd $KYUUBI_SOURCE_PATH/docs
-```
-
-Create a virtual environment named 'kyuubi' (or anything you like) using `virtualenv` if it does not already exist.
-
-```bash
-virtualenv kyuubi
-```
-
-Activate it,
-
-```bash
-source ./kyuubi/bin/activate
-```
-
-## Install all dependencies
-
-Install all dependencies enumerated in the `requirements.txt`.
-
-```bash
-pip install -r requirements.txt
-```
-
-## Create Documentation
-
-Make sure you are in the `$KYUUBI_SOURCE_PATH/docs` directory.
-
-Linux & macOS
-```bash
-make html
-```
-Windows
-```bash
-make.bat html
-```
-
-If the build process succeeds, the HTML pages are in `$KYUUBI_SOURCE_PATH/docs/_build/html`.
-
-## View Locally
-
-Open the `$KYUUBI_SOURCE_PATH/docs/_build/html/index.html` file in your favorite web browser.
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/building.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/building.md.txt
deleted file mode 100644
index 0de8c1c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/building.md.txt
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Building Kyuubi
-
-## Building Kyuubi with Apache Maven
-
-**Kyuubi** is built based on [Apache Maven](http://maven.apache.org),
-
-```bash
-./build/mvn clean package -DskipTests
-```
-
-This results in the creation of all sub-modules of the Kyuubi project without running any unit tests.
-
-If you want to test it manually, you can start Kyuubi directly from the Kyuubi project root by running
-
-```bash
-bin/kyuubi start
-```
-
-## Building a Submodule Individually
-
-For instance, you can build the Kyuubi Common module using:
-
-```bash
-build/mvn clean package -pl kyuubi-common -DskipTests
-```
-
-## Building Submodules Individually
-
-For instance, you can build the Kyuubi Common and Kyuubi High Availability modules together using:
-
-```bash
-build/mvn clean package -pl kyuubi-common,kyuubi-ha -DskipTests
-```
-
-## Skipping Some Modules
-
-For instance, you can build the Kyuubi modules without Kyuubi Codecov and Assembly modules using:
-
-```bash
-mvn clean install -pl '!dev/kyuubi-codecov,!kyuubi-assembly' -DskipTests
-```
-
-## Building Kyuubi against Different Apache Spark versions
-
-Since v1.1.0, Kyuubi supports building with different Spark profiles,
-
-Profile | Default  | Since
---- | --- | --- 
--Pspark-3.1 | No | 1.1.0
--Pspark-3.2 | Yes | 1.4.0
--Pspark-3.3 | No | 1.6.0
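-
-For example, a sketch combining the packaging command above with the Spark 3.3 profile:
-
-```bash
-./build/mvn clean package -Pspark-3.3 -DskipTests
-```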
-
-
-## Building with Apache dlcdn site
-
-By default, we use `https://archive.apache.org/dist/` to download the built-in release packages of engines,
-such as Spark or Flink.
-But sometimes, you may find it hard to reach, or the download speed is too slow,
-then you can redefine `apache.archive.dist` via `-Pmirror-cdn` to accelerate the download speed.
-For example,
-
-```bash
-build/mvn clean package -Pmirror-cdn
-```
-
-The profile migrates your download repo to the Apache officially suggested site - https://dlcdn.apache.org.
-Note that this site only holds the latest versions of Apache releases. The download may fail if the specific version
-defined by `spark.version` or `flink.version` is outdated.
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/debugging.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/debugging.md.txt
deleted file mode 100644
index 90ebd58..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/debugging.md.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Debugging Kyuubi
-
-You can use the [Java Debug Wire Protocol](https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html#Plugin) to debug Kyuubi
-with your favorite IDE tool, e.g. IntelliJ IDEA.
-
-## Debugging Server
-
-We can configure the JDWP agent in `KYUUBI_JAVA_OPTS` for debugging.
- 
- 
-For example,
-```bash
-KYUUBI_JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 \
-bin/kyuubi start
-```
-
-In the IDE, set the corresponding parameters (host & port) in the debug configurations, for example,
-<div align=center>
-
-![](../imgs/idea_debug.png)
-
-</div>
-
-## Debugging Engine
-
-We can configure the Kyuubi properties to enable debugging engine.
-
-### Flink Engine
-
-```bash
-kyuubi.engine.flink.java.options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-### Trino Engine
-
-```bash
-kyuubi.engine.trino.java.options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-### Hive Engine
-
-```bash
-kyuubi.engine.hive.java.options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-## Debugging Apps
-
-### Spark Engine
-
-- Spark Driver
-
-```bash
-spark.driver.extraJavaOptions   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-- Spark Executor
-
-```bash
-spark.executor.extraJavaOptions   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-### Flink Engine
-
-- Flink Processes
-
-```bash
-env.java.opts   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-- Flink JobManager
-
-```bash
-env.java.opts.jobmanager   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-- Flink TaskManager
-
-```bash
-env.java.opts.taskmanager   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-- Flink HistoryServer
-
-```bash
-env.java.opts.historyserver   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
-
-- Flink Client
-
-```bash
-env.java.opts.client   -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
-```
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/developer.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/developer.md.txt
deleted file mode 100644
index 5f69f4a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/developer.md.txt
+++ /dev/null
@@ -1,63 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Developer Tools
-
-## Update Project Version
-
-```bash
-
-build/mvn versions:set -DgenerateBackupPoms=false
-```
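-
-For example, a sketch that sets the project to a specific version (the version number is illustrative; `-DnewVersion` is the standard parameter of the Maven versions plugin):
-
-```bash
-build/mvn versions:set -DnewVersion=1.6.1 -DgenerateBackupPoms=false
-```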
-
-## Update Document Version
-
-Whenever the project version updates, please also update the document version at `docs/conf.py` to target the upcoming release.
-
-For example,
-
-```python
-release = '1.2.0'
-```
-
-## Update Dependency List
-
-Kyuubi uses the `dev/dependencyList` file to indicate what upstream dependencies will actually go to the server-side classpath.
-
-For Pull requests, a linter for dependency check will be automatically executed in GitHub Actions.
-
-You can run `build/dependency.sh` locally first to detect the potential dependency change.
-
-If the changes look expected, run `build/dependency.sh --replace` to update `dev/dependencyList` in your Pull request.
-
-
-## Format All Code
-
-Kyuubi uses [Spotless](https://github.com/diffplug/spotless/tree/main/plugin-maven)
-with [google-java-format](https://github.com/google/google-java-format) and [Scalafmt](https://scalameta.org/scalafmt/)
-to format the Java and Scala code.
-
-You can run `dev/reformat` to format all Java and Scala code.
-
-
-## Append descriptions of new configurations to settings.md
-
-Kyuubi uses settings.md to explain available configurations.
-
-You can run `KYUUBI_UPDATE=1 build/mvn clean install -Pflink-provided,spark-provided,hive-provided -DwildcardSuites=org.apache.kyuubi.config.AllKyuubiConfiguration`
-to append descriptions of new configurations to settings.md.
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/distribution.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/distribution.md.txt
deleted file mode 100644
index 680f4e2..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/distribution.md.txt
+++ /dev/null
@@ -1,56 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Building a Runnable Distribution
-
-To create a Kyuubi distribution like those distributed by [Kyuubi Release Page](https://kyuubi.apache.org/releases.html),
-and that is laid out to be runnable, use `./build/dist` in the project root directory.
-
-For more information on usage, run `./build/dist --help`
-
-```logtalk
-./build/dist - Tool for making binary distributions of Kyuubi
-
-Usage:
-+------------------------------------------------------------------------------------------------------+
-| ./build/dist [--name <custom_name>] [--tgz] [--flink-provided] [--spark-provided] [--hive-provided]  |
-|              [--mvn <maven_executable>] <maven build options>                                        |
-+------------------------------------------------------------------------------------------------------+
-name:           -  custom binary name, using project version if undefined
-tgz:            -  whether to make a whole bundled package
-flink-provided: -  whether to make a package without Flink binary
-spark-provided: -  whether to make a package without Spark binary
-hive-provided:  -  whether to make a package without Hive binary
-mvn:            -  external maven executable location
-```
-
-For instance,
-
-```bash
-./build/dist --name custom-name --tgz
-```
-
-This results in a Kyuubi distribution named `apache-kyuubi-{version}-bin-custom-name.tgz` for you.
-
-If you are planning to deploy Kyuubi where `spark`/`flink`/`hive` is already provided, in other words, when it is not required to bundle the spark/flink/hive binaries, use 
-
-```bash
-./build/dist --tgz --spark-provided --flink-provided --hive-provided
-```
-
-Then you will get a Kyuubi distribution named `apache-kyuubi-{version}-bin.tgz`, without the spark/flink/hive binaries.
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/idea_setup.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/idea_setup.md.txt
deleted file mode 100644
index bf5c44d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/idea_setup.md.txt
+++ /dev/null
@@ -1,96 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# IntelliJ IDEA Setup Guide
-
-## Copyright Profile
-
-Every file needs to include the Apache license as a header. This can be automated in IntelliJ by adding a Copyright
-profile:
-
-1. Go to "Settings/Preferences" → "Editor" → "Copyright" → "Copyright Profiles".
-2. Add a new profile and name it "Apache".
-3. Add the following text as the copyright text:
-
-   ```
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-   
-       http://www.apache.org/licenses/LICENSE-2.0
-   
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and 
-   limitations under the License.
-   ```
-4. Go to "Editor" → "Copyright" and choose the "Apache" profile as the default profile for this project.
-5. Click "Apply".
-
-## Required Plugins
-
-Go to "Settings/Preferences" → "Plugins" and select the "Marketplace" tab. Search for the following plugins, install
-them, and restart the IDE if prompted:
-
-* [Scala](https://plugins.jetbrains.com/plugin/1347-scala)
-
-You will also need to install the [google-java-format](https://github.com/google/google-java-format)
-plugin. However, a specific version of this plugin is required. Download
-[google-java-format v1.7.0.6](https://plugins.jetbrains.com/plugin/8527-google-java-format/versions/stable/115957)
-and install it as follows. Make sure to NEVER update this plugin.
-
-1. Go to "Settings/Preferences" → "Plugins".
-2. Click the gear icon and select "Install Plugin from Disk".
-3. Navigate to the downloaded ZIP file and select it.
-
-## Formatter For Java
-
-Kyuubi uses [Spotless](https://github.com/diffplug/spotless/tree/main/plugin-maven) together with
-[google-java-format](https://github.com/google/google-java-format) to format the Java code.
-
-It is recommended to automatically format your code by applying the following settings:
-
-1. Go to "Settings/Preferences" → "Other Settings" → "google-java-format Settings".
-2. Tick the checkbox to enable the plugin.
-3. Change the code style to "Default Google Java style".
-4. Go to "Settings/Preferences" → "Tools" → "Actions on Save".
-5. Select "Reformat code".
-
-If you are using IDEA version 2021.1 or below, please replace steps 4 and 5 above by using the
-[Save Actions](https://plugins.jetbrains.com/plugin/7642-save-actions) plugin.
-
-## Formatter For Scala
-
-Enable [Scalafmt](https://scalameta.org/scalafmt/) as follows:
-
-1. Go to "Settings/Preferences" → "Editor" → "Code Style" → "Scala"
-2. Set "Formatter" to "Scalafmt"
-3. Enable "Reformat on file save"
-
-## Checkstyle For Scala
-
-Enable [Scalastyle](http://www.scalastyle.org/) as follows:
-
-1. Go to "Settings/Preferences" → "Editor" → "Inspections".
-2. Search for "Scala style inspection" and enable it.
-
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/index.rst.txt
deleted file mode 100644
index 30207c6..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/index.rst.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-.. image:: ../imgs/kyuubi_logo.png
-   :align: center
-
-Develop Tools
-=============
-
-.. toctree::
-    :maxdepth: 2
-
-    building
-    distribution
-    build_document
-    testing
-    debugging
-    developer
-    idea_setup
diff --git a/content/docs/r1.6.0-incubating/_sources/develop_tools/testing.md.txt b/content/docs/r1.6.0-incubating/_sources/develop_tools/testing.md.txt
deleted file mode 100644
index deb984f..0000000
--- a/content/docs/r1.6.0-incubating/_sources/develop_tools/testing.md.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Running Tests
-
-**Kyuubi** can be tested with [Apache Maven](http://maven.apache.org) and the ScalaTest Maven Plugin;
-please refer to the [ScalaTest documentation](http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin).
-
-## Running Tests Fully
-
-The following is an example of a command to run all the tests:
-
-```bash
-./build/mvn clean install
-```
-
-## Running Tests for a Module
-
-```bash
-./build/mvn clean install -pl kyuubi-common
-```
-
-## Running Tests for a Single Test
-
-When developing locally, it’s convenient to run one single test, or a couple of tests, rather than all.
-
-With Maven, you can use the -DwildcardSuites flag to run individual Scala tests:
-
-```bash
-./build/mvn clean install -Dtest=none -DwildcardSuites=org.apache.kyuubi.service.FrontendServiceSuite
-```
-
-If you want to run a single test that needs to integrate with the kyuubi-spark-sql-engine module, please build the package
-for the kyuubi-spark-sql-engine module first.
-
-You can leverage the ready-made tool for creating a binary distribution.
-
-```bash
-./build/dist
-```
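-
-Alternatively, a minimal sketch for building just that module (assuming it lives under `externals/kyuubi-spark-sql-engine`; adjust the path to your checkout):
-
-```bash
-./build/mvn clean package -pl externals/kyuubi-spark-sql-engine -am -DskipTests
-```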
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/flink/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/flink/index.rst.txt
deleted file mode 100644
index 01bbecf..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/flink/index.rst.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Extensions for Flink
-====================
-
-.. toctree::
-    :maxdepth: 1
-
-    ../../../connector/flink/index
-
-.. warning::
-   This page is still in-progress.
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/hive/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/hive/index.rst.txt
deleted file mode 100644
index 8aeebf1..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/hive/index.rst.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Extensions for Hive
-===================
-
-.. toctree::
-    :maxdepth: 2
-
-    ../../../connector/hive/index
-
-.. warning::
-   This page is still in-progress.
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/index.rst.txt
deleted file mode 100644
index e0ff281..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/index.rst.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Engine Side Extensions
-======================
-
-Engine side extensions are applied to Kyuubi engines. Some of them can be
-managed by administrators, while others can be applied by end-users dynamically
-at runtime.
-
-
-.. toctree::
-    :maxdepth: 2
-
-    spark/index
-    flink/index
-    hive/index
-    trino/index
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/functions.md.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/functions.md.txt
deleted file mode 100644
index b467a3a..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/functions.md.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY [org.apache.kyuubi.engine.spark.udf.KyuubiDefinedFunctionSuite] -->
-
-# Auxiliary SQL Functions
-
-Kyuubi provides several auxiliary SQL functions as a supplement to Spark's [Built-in Functions](https://spark.apache.org/docs/latest/api/sql/index.html#built-in-functions).
-
-Name | Description | Return Type | Since
---- | --- | --- | ---
-kyuubi_version | Return the version of Kyuubi Server | string | 1.3.0
-engine_name | Return the spark application name for the associated query engine | string | 1.3.0
-engine_id | Return the spark application id for the associated query engine | string | 1.4.0
-system_user | Return the system user name for the associated query engine | string | 1.3.0
-session_user | Return the session username for the associated query engine | string | 1.4.0
-
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/index.rst.txt
deleted file mode 100644
index 058823d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/index.rst.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Extensions for Spark
-====================
-
-.. toctree::
-    :maxdepth: 1
-
-    z-order
-    rules
-    ../../../security/authorization/spark/index
-    functions
-    ../../../connector/spark/index
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/rules.md.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/rules.md.txt
deleted file mode 100644
index 08750c8..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/rules.md.txt
+++ /dev/null
@@ -1,82 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Auxiliary Optimization Rules
-
-Kyuubi provides a SQL extension out of the box. Due to version compatibility with Apache Spark, each extension build targets a specific Spark branch; see the support matrix in the Usage section below.
-Newer Apache Spark versions will be supported as they are released. Thanks to the adaptive query execution framework (AQE), Kyuubi can perform these optimizations.
-
-## Features
-
-- merging small files automatically
-  
-  Small files are a long-standing issue with Apache Spark. Kyuubi can merge small files by adding an extra shuffle.
-  Currently, Kyuubi supports handling small files for both datasource tables and Hive tables, and it can also optimize dynamic partition insertion.
-  For example, for a common write query such as `INSERT INTO TABLE $table1 SELECT * FROM $table2`, Kyuubi will introduce an extra shuffle before the write and the small files will go away.
-
-
-- insert shuffle node before Join to make AQE OptimizeSkewedJoin work
-
-  In the current implementation, Apache Spark can only optimize a skewed join when it is a standard join, which means the join must have two sort and shuffle nodes.
-  However, in complex scenarios this assumption is easily broken. Kyuubi can guarantee the join is standard by adding an extra shuffle node before the join,
-  so that OptimizeSkewedJoin can work better.
-
-
-- stage level config isolation in AQE
-
-  As we know, `spark.sql.adaptive.advisoryPartitionSizeInBytes` is a key config of Apache Spark AQE.
-  It controls how much data each task should handle during shuffle, so we usually use 64MB or a smaller value to get enough parallelism.
-  However, we generally expect output files to be reasonably big, such as 256MB or 512MB. Kyuubi can isolate this config per stage to resolve the conflict, so that
-  intermediate stages use a small partition size while the final write stage uses a big one (a minimal conf sketch follows below).
-
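-  A minimal `spark-defaults.conf` sketch of this isolation (the sizes below are purely illustrative; the `spark.sql.finalStage.` prefix and the `finalStageConfigIsolation` switch are described in the Additional Configurations table):
-
-  ```properties
-  # enable stage-level config isolation (disabled by default)
-  spark.sql.optimizer.finalStageConfigIsolation.enabled=true
-  # small advisory size for intermediate stages to keep parallelism high
-  spark.sql.adaptive.advisoryPartitionSizeInBytes=64m
-  # larger advisory size for the final stage to produce bigger output files
-  spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes=512m
-  ```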
-
-## Usage
-
-| Kyuubi Spark SQL extension | Supported Spark version(s) | Available since  | EOL              | Bundled in Binary release tarball | Maven profile
-| -------------------------- | -------------------------- | ---------------- | ---------------- | --------------------------------- | -------------
-| kyuubi-extension-spark-3-1 | 3.1.x                      | 1.3.0-incubating | N/A              | 1.3.0-incubating                  | spark-3.1
-| kyuubi-extension-spark-3-2 | 3.2.x                      | 1.4.0-incubating | N/A              | 1.4.0-incubating                  | spark-3.2
-| kyuubi-extension-spark-3-3 | 3.3.x                      | 1.6.0-incubating | N/A              | 1.6.0-incubating                  | spark-3.3
-
-1. Check the matrix above to confirm that you are using a supported Spark version, and find the corresponding Kyuubi Spark SQL Extension jar
-2. Get the Kyuubi Spark SQL Extension jar
-   1. Each Kyuubi binary release tarball only contains one default version of the Kyuubi Spark SQL Extension jar; if you are looking for that version, you can find it under `$KYUUBI_HOME/extension`
-   2. All supported versions of the Kyuubi Spark SQL Extension jar will be deployed to [Maven Central](https://search.maven.org/search?q=kyuubi-extension-spark)
-   3. If you like, you can compile the Kyuubi Spark SQL Extension jar by yourself; activate the corresponding Maven profile in your compile command, e.g. you can get the Kyuubi Spark SQL Extension jar for Spark 3.1 under `extensions/spark/kyuubi-extension-spark-3-1/target` when compiling with `-Pspark-3.1`
-3. Put the Kyuubi Spark SQL extension jar `kyuubi-extension-spark-*.jar` into `$SPARK_HOME/jars`
-4. Enable `KyuubiSparkSQLExtension`, i.e. add a config into `$SPARK_HOME/conf/spark-defaults.conf`: `spark.sql.extensions=org.apache.kyuubi.sql.KyuubiSparkSQLExtension`
-
-Now, you can enjoy the Kyuubi SQL Extension.
-
-
-## Additional Configurations
-
-Kyuubi provides some configs to make these features easy to use.
-
-Name | Default Value | Description | Since
---- | --- | --- | ---
-spark.sql.optimizer.insertRepartitionBeforeWrite.enabled | true | Add repartition node at the top of query plan. An approach of merging small files. | 1.2.0
-spark.sql.optimizer.insertRepartitionNum | none | The partition number if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. If AQE is disabled, the default value is `spark.sql.shuffle.partitions`. If AQE is enabled, the default value is none that means depend on AQE. | 1.2.0
-spark.sql.optimizer.dynamicPartitionInsertionRepartitionNum | 100 | The partition number of each dynamic partition if `spark.sql.optimizer.insertRepartitionBeforeWrite.enabled` is enabled. We will repartition by dynamic partition columns to reduce the small file but that can cause data skew. This config is to extend the partition of dynamic partition column to avoid skew but may generate some small files. | 1.2.0
-spark.sql.optimizer.forceShuffleBeforeJoin.enabled | false | Ensure shuffle node exists before shuffled join (shj and smj) to make AQE `OptimizeSkewedJoin` works (complex scenario join, multi table join). | 1.2.0
-spark.sql.optimizer.finalStageConfigIsolation.enabled | false | If true, the final stage support use different config with previous stage. The prefix of final stage config key should be `spark.sql.finalStage.`. For example, the raw spark config: `spark.sql.adaptive.advisoryPartitionSizeInBytes`, then the final stage config should be: `spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes`. | 1.2.0
-spark.sql.analyzer.classification.enabled | false | When true, allows Kyuubi engine to judge this SQL's classification and set `spark.sql.analyzer.classification` back into sessionConf. Through this configuration item, Spark can optimizing configuration dynamic. | 1.4.0
-spark.sql.optimizer.insertZorderBeforeWriting.enabled | true | When true, we will follow target table properties to insert zorder or not. The key properties are: 1) `kyuubi.zorder.enabled`: if this property is true, we will insert zorder before writing data. 2) `kyuubi.zorder.cols`: string split by comma, we will zorder by these cols. | 1.4.0
-spark.sql.optimizer.zorderGlobalSort.enabled | true | When true, we do a global sort using zorder. Note that, it can cause data skew issue if the zorder columns have less cardinality. When false, we only do local sort using zorder. | 1.4.0
-spark.sql.watchdog.maxPartitions | none | Set the max partition number when spark scans a data source. Enable MaxPartitionStrategy by specifying this configuration. Add maxPartitions Strategy to avoid scan excessive partitions on partitioned table, it's optional that works with defined | 1.4.0
-spark.sql.optimizer.dropIgnoreNonExistent | false | When true, do not report an error if DROP DATABASE/TABLE/VIEW/FUNCTION/PARTITION specifies a non-existent database/table/view/function/partition | 1.5.0
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order-benchmark.md.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order-benchmark.md.txt
deleted file mode 100644
index d820eee..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order-benchmark.md.txt
+++ /dev/null
@@ -1,240 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Z-order Benchmark
-
-Z-order is a technique that allows you to map multidimensional data to a single dimension. We did a performance test.
-
-For this test, we used the Aliyun Databricks Delta test case:
-https://help.aliyun.com/document_detail/168137.html?spm=a2c4g.11186623.6.563.10d758ccclYtVb.
-
-Prepare data for the three scenarios:
-
-1. 10 billion rows and 200 files (parquet files): for big files (~1G each)
-2. 10 billion rows and 1,000 files (parquet files): for medium files (~200M each)
-3. 1 billion rows and 10,000 files (parquet files): for small files (~200K each)
-
-Test env:
-spark-3.1.2
-hadoop-2.7.2
-kyuubi-1.4.0
-
-Test steps:
-
-Step 1: create Hive tables.
-
-```scala
-spark.sql(s"drop database if exists $dbName cascade")
-spark.sql(s"create database if not exists $dbName")
-spark.sql(s"use $dbName")
-spark.sql(s"create table $connRandomParquet (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connOrderbyOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connOrderby (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connZorderOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connZorder (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"show tables").show(false)
-```
-
-Step 2: prepare data for the parquet table in the three scenarios,
-using the following code.
-
-```scala
-def randomIPv4(r: Random) = Seq.fill(4)(r.nextInt(256)).mkString(".")
-def randomPort(r: Random) = r.nextInt(65536)
-
-def randomConnRecord(r: Random) = ConnRecord(
-  src_ip = randomIPv4(r), src_port = randomPort(r),
-  dst_ip = randomIPv4(r), dst_port = randomPort(r))
-```
-
-Step 3: optimize with z-order on only the IP columns and, for comparison, with ORDER BY on only the IP columns; sort columns: src_ip, dst_ip; shuffle partitions set to the number of files.
-
-```
-INSERT overwrite table conn_order_only_ip select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, dst_ip;
-OPTIMIZE conn_zorder_only_ip ZORDER BY src_ip, dst_ip;
-```
-
-Step 4: optimize with z-order and, for comparison, with ORDER BY on all columns; sort columns: src_ip, src_port, dst_ip, dst_port; shuffle partitions set to the number of files.
-
-```
-INSERT overwrite table conn_order select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, src_port, dst_ip, dst_port;
-OPTIMIZE conn_zorder ZORDER BY src_ip, src_port, dst_ip, dst_port;
-```
-
-
-The complete code is as follows:
-
-```shell
-./spark-shell
-import org.apache.spark.SparkConf
-import org.apache.spark.sql.SparkSession
-
-case class ConnRecord(src_ip: String, src_port: Int, dst_ip: String, dst_port: Int)
-
-val  conf  = new SparkConf().setAppName("zorder_test")
-val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
-import spark.implicits._
-
-val sc = spark.sparkContext
-sc.setLogLevel("WARN")
-//ten billion rows and two hundred files
-val numRecords = 10*1000*1000*1000L
-val numFiles = 200
-
-val dbName = s"zorder_test_$numFiles"
-val baseLocation = s"hdfs://localhost:9000/zorder_test/$dbName/"
-val connRandomParquet = "conn_random_parquet"
-// table names for the ORDER BY variants referenced by the create/insert statements below
-val connOrderbyOnlyIp = "conn_order_only_ip"
-val connOrderby = "conn_order"
-val connZorderOnlyIp = "conn_zorder_only_ip"
-val connZorder = "conn_zorder"
-spark.conf.set("spark.sql.shuffle.partitions", numFiles)
-spark.conf.get("spark.sql.shuffle.partitions")
-spark.conf.set("spark.sql.hive.convertMetastoreParquet",false)
-spark.sql(s"drop database if exists $dbName cascade")
-spark.sql(s"create database if not exists $dbName")
-spark.sql(s"use $dbName")
-spark.sql(s"create table $connRandomParquet (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connOrderbyOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connOrderby (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connZorderOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"create table $connZorder (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
-spark.sql(s"show tables").show(false)
-
-import scala.util.Random
-// Function for preparing Zorder_Test data
-def randomIPv4(r: Random) = Seq.fill(4)(r.nextInt(256)).mkString(".")
-def randomPort(r: Random) = r.nextInt(65536)
-
-def randomConnRecord(r: Random) = ConnRecord(
-src_ip = randomIPv4(r), src_port = randomPort(r),
-dst_ip = randomIPv4(r), dst_port = randomPort(r))
-
-val df = spark.range(0, numFiles, 1, numFiles).mapPartitions { it =>
-val partitionID = it.toStream.head
-val r = new Random(seed = partitionID)
-Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
-}
-
-df.write
-.mode("overwrite")
-.format("parquet")
-.insertInto(connRandomParquet)
-
-spark.read.table(connRandomParquet)
-.write
-.mode("overwrite")
-.format("parquet")
-.insertInto(connZorderOnlyIp)
-
-spark.read.table(connRandomParquet)
-.write
-.mode("overwrite")
-.format("parquet")
-.insertInto(connZorder)
-spark.stop()
-
-```
-
-Z-order Optimize statement:
-
-```sql
-
-set spark.sql.hive.convertMetastoreParquet=false;
-
-OPTIMIZE conn_zorder_only_ip ZORDER BY src_ip, dst_ip;
-
-OPTIMIZE zorder_test.conn_zorder ZORDER BY src_ip, src_port, dst_ip, dst_port;
-```
-
-ORDER BY statement:
-
-```
-INSERT overwrite table conn_order_only_ip select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, dst_ip;
-
-INSERT overwrite table conn_order select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, src_port, dst_ip, dst_port;
-
-```
-
-Query statement:
-
-```sql
-
-set spark.sql.hive.convertMetastoreParquet=true;
-
-select count(*) from conn_random_parquet where src_ip like '157%' and dst_ip like '216.%';
-
-select count(*) from conn_zorder_only_ip where src_ip like '157%' and dst_ip like '216.%';
-
-select count(*) from conn_zorder where src_ip like '157%' and dst_ip like '216.%';
-```
-
-
-## Benchmark result
-
-We have done two performance tests: one compares the efficiency of Z-order optimize and ORDER BY sort,
-and the other compares queries against z-order optimized data and random data.
-
-### Efficiency of Z-order Optimize and Order-by Sort
-
-**10 billion data and 1000 files and Query resource: 200 core 600G memory**
-
-Z-order by or order by only ip:
-
-| Table               | row count      | optimize  time     |
-| ------------------- | -------------- | ------------------ |
-| conn_order_only_ip  | 10,000,000,000 | 1591.99 s          |
-| conn_zorder_only_ip | 10,000,000,000 | 8371.405 s         |
-
-Z-order by or order by all columns:
-
-| Table               | row count      | optimize  time     |
-| ------------------- | -------------- | ------------------ |
-| conn_order          | 10,000,000,000 | 1515.298 s         |
-| conn_zorder         | 10,000,000,000 | 11057.194 s        |
-
-### Z-order by benchmark result
-
-By querying the tables before and after optimization, we find that:
-
-**10 billion data and 200 files and Query resource: 200 core 600G memory**
-
-| Table               | Average File Size | Scan row count | Average query time | row count Skipping ratio |
-| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
-| conn_random_parquet | 1.2 G             | 10,000,000,000 | 27.554 s           | 0.0%                     |
-| conn_zorder_only_ip | 890 M             | 43,170,600     | 2.459 s            | 99.568%                  |
-| conn_zorder         | 890 M             | 54,841,302     | 3.185 s            | 99.451%                  |
-
-
-
-**10 billion data and 1000 files and Query resource: 200 core 600G memory**
-
-| Table               | Average File Size | Scan row count | Average query time | row count Skipping ratio |
-| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
-| conn_random_parquet | 234.8 M           | 10,000,000,000 | 27.031 s           | 0.0%                     |
-| conn_zorder_only_ip | 173.9 M           | 53,499,068     | 3.120 s            | 99.465%                  |
-| conn_zorder         | 174.0 M           | 35,910,500     | 3.103 s            | 99.640%                  |
-
-
-
-**1 billion data and 10000 files and Query resource: 10 core 40G memory**
-
-| Table               | Average File Size | Scan row count | Average query time | row count Skipping ratio |
-| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
-| conn_random_parquet | 2.7 M             | 1,000,000,000  | 76.772 s           | 0.0%                     |
-| conn_zorder_only_ip | 2.1 M             | 406,572        | 3.963 s            | 99.959%                  |
-| conn_zorder         | 2.2 M             | 387,942        | 3.621s             | 99.961%                  |
-
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order.md.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order.md.txt
deleted file mode 100644
index 0ac4152..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/spark/z-order.md.txt
+++ /dev/null
@@ -1,121 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Z-Ordering Support
-
-To improve query speed, Kyuubi supports Z-Ordering to optimize the layout of data
-stored in all kinds of storage with various data formats.
-
-Please check our benchmark report [here](z-order-benchmark.md).
-
-
-## Introduction
-
-The following picture shows the workflow of z-order.
-
-![](../../../imgs/extension/zorder-workflow.png)
-
-It contains three parties:
-- Upstream
-
-  Due to the extra sort, the upstream job will run a little slower than before
-
-- Table
-
-  Z-order provides good data clustering, so the compression ratio can be improved
-
-- Downstream
-
-  Downstream read performance improves thanks to data skipping. Since parquet and orc files automatically collect data statistics, e.g. minimum and maximum values, at write time, good data clustering makes pushed-down filters more efficient
-
-### Supported table format
-
-| Table Format | Supported |
-|--------------|-----------|
-| parquet      |     Y     |
-| orc          |     Y     |
-| json         |     N     |
-| csv          |     N     |
-| text         |     N     |
-
-### Supported column data type
-
-| Column Data Type | Supported |
-|------------------|-----------|
-| byte             |     Y     |
-| short            |     Y     |
-| int              |     Y     |
-| long             |     Y     |
-| float            |     Y     |
-| double           |     Y     |
-| boolean          |     Y     |
-| string           |     Y     |
-| decimal          |     Y     |
-| date             |     Y     |
-| timestamp        |     Y     |
-| array            |     N     |
-| map              |     N     |
-| struct           |     N     |
-| udt              |     N     |
-
-## How to use
-
-This feature is part of the Kyuubi Spark SQL extension, so you should apply the extension to Spark with the following steps.
-
-- add the extension jar: copy `$KYUUBI_HOME/extension/kyuubi-extension-spark-3-1*` into `$SPARK_HOME/jars/`
-- add config into `spark-defaults.conf`: `spark.sql.extensions=org.apache.kyuubi.sql.KyuubiSparkSQLExtension`
-
-Because of the extension, z-order only works with Spark 3.1 and higher versions.
-
-### Optimize history data
-
-If you want to optimize the historical data of a table, the `OPTIMIZE ...` syntax is the way to go. Because Spark SQL doesn't support reading and overwriting the same datasource table, this syntax can only optimize Hive tables.
-
-#### Syntax
-```sql
-OPTIMIZE table_name [WHERE predicate] ZORDER BY col_name1 [, ...]
-```
-
-Note that, the `predicate` only supports partition spec.
-
-#### Examples
-```sql
-OPTIMIZE t1 ZORDER BY c3;
-
-OPTIMIZE t1 ZORDER BY c1,c2;
-
-OPTIMIZE t1 WHERE day = '2021-12-01' ZORDER BY c1,c2;
-```
-
-### Optimize incremental data
-
-Kyuubi supports optimizing a table automatically for incremental data, e.g. a time-partitioned table. The only thing you need to do is add the Kyuubi properties to the target table's table properties:
-```sql
-ALTER TABLE t1 SET TBLPROPERTIES('kyuubi.zorder.enabled'='true','kyuubi.zorder.cols'='c1,c2');
-```
-- the key `kyuubi.zorder.enabled` decides whether the table allows Kyuubi to optimize by z-order.
-- the key `kyuubi.zorder.cols` decides which columns are used to optimize by z-order.
-
-Kyuubi will detect these properties and optimize the SQL using z-order during SQL compilation, so you can enjoy z-order with all table-writing commands like:
-
-```sql
-INSERT INTO TABLE t1 PARTITION() ...;
-
-INSERT OVERWRITE TABLE t1 PARTITION() ...;
-
-CREATE TABLE t1 AS SELECT ...;
-```
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/engines/trino/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/engines/trino/index.rst.txt
deleted file mode 100644
index 1e5f9bc..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/engines/trino/index.rst.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Extensions for Trino
-====================
-
-
-.. warning::
-   This page is still in-progress.
-
-.. toctree::
-    :maxdepth: 1
-
-    ../../../connector/trino/index
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/index.rst.txt
deleted file mode 100644
index f177a91..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/index.rst.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Extensions
-==========
-
-Besides the base use case, Kyuubi also has some extension points for
-extending use cases. By defining plugins of your own or applying
-third-party ones, Kyuubi allows you to run your plugin's functionality at
-specific points.
-
-The extension points can be divided into server side extensions and
-engine side extensions.
-
-Server side extensions are applied by kyuubi administrators to extend the
-ability of kyuubi servers.
-
-Engine side extensions are applied to Kyuubi engines. Some of them can be
-managed by administrators, while others can be applied by end-users dynamically
-at runtime.
-
-
-.. toctree::
-    :maxdepth: 2
-
-    Server Side Extensions <server/index>
-    Engine Side Extensions <engines/index>
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/server/applications.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/server/applications.rst.txt
deleted file mode 100644
index 4aebb6b..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/server/applications.rst.txt
+++ /dev/null
@@ -1,154 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Manage Applications against Extra Cluster Managers
-==================================================
-
-.. versionadded:: 1.6.0
-
-.. caution:: unstable
-
-Inside Kyuubi, the Kyuubi server uses the ``ApplicationManager`` module to manage all applications launched by itself, including different kinds of Kyuubi engines and self-contained applications.
-
-The ``ApplicationManager`` leverages methods provided by application operation implementations derived from ``org.apache.kyuubi.engine.ApplicationOperation`` to monitor the status of those applications and to kill abnormal applications in case they get orphaned; more methods may be introduced in the future.
-
-An ``ApplicationOperation`` implementation is usually built upon clients or APIs provided by cluster managers, such as Hadoop YARN, Kubernetes, etc.
-
-For now, Kyuubi has already supported several built-in application operations:
-
-- ``JpsApplicationOperation``: an operation that can manage apps with a local process, e.g. a local mode spark application
-- ``YarnApplicationOperation``: an operation that can manage apps with a Hadoop Yarn cluster, e.g. a spark on yarn application
-- ``KubernetesApplicationOperation``: an operation that can manage apps with a k8s cluster, e.g. a spark on k8s application
-
-Besides those built-in ones, Kyuubi also supports loading custom ``ApplicationOperation`` through the Java `ServiceLoader` (SPI) for extra cluster managers.
-
-The rest of this article will show you the specifications and steps to build and enable a custom operation.
-
-   .. code-block:: scala
-
-      trait ApplicationOperation {
-
-        /**
-         * Step for initializing the instance.
-         */
-        def initialize(conf: KyuubiConf): Unit
-
-        /**
-         * Step to clean up the instance
-         */
-        def stop(): Unit
-
-        /**
-         * Called before other method to do a quick skip
-         *
-         * @param clusterManager the underlying cluster manager or just local instance
-         */
-        def isSupported(clusterManager: Option[String]): Boolean
-
-        /**
-         * Kill the app/engine by the unique application tag
-         *
-         * @param tag the unique application tag for engine instance.
-         *            For example,
-         *            if the Hadoop Yarn is used, for spark applications,
-         *            the tag will be preset via spark.yarn.tags
-         * @return a response message describing the result of the kill process.
-         *
-         * @note For implementations, please suppress exceptions and always return KillResponse
-         */
-        def killApplicationByTag(tag: String): KillResponse
-
-        /**
-         * Get the engine/application status by the unique application tag
-         *
-         * @param tag the unique application tag for engine instance.
-         * @return [[ApplicationInfo]]
-         */
-        def getApplicationInfoByTag(tag: String): ApplicationInfo
-      }
-
-   .. code-block:: scala
-
-      /**
-        * (killed or not, hint message)
-        */
-      type KillResponse = (Boolean, String)
-
-An ``ApplicationInfo`` is used to represent the application information, including application id, name, state, url address, and error message.
-
-   .. code-block:: scala
-
-      object ApplicationState extends Enumeration {
-        type ApplicationState = Value
-        val PENDING, RUNNING, FINISHED, KILLED, FAILED, ZOMBIE, NOT_FOUND, UNKNOWN = Value
-      }
-
-      case class ApplicationInfo(
-          id: String,
-          name: String,
-          state: ApplicationState,
-          url: Option[String] = None,
-          error: Option[String] = None)
-
-For application state mapping, you can refer to the YARN implementation:
-
-   .. code-block:: scala
-
-      def toApplicationState(state: YarnApplicationState): ApplicationState = state match {
-        case YarnApplicationState.NEW => ApplicationState.PENDING
-        case YarnApplicationState.NEW_SAVING => ApplicationState.PENDING
-        case YarnApplicationState.SUBMITTED => ApplicationState.PENDING
-        case YarnApplicationState.ACCEPTED => ApplicationState.PENDING
-        case YarnApplicationState.RUNNING => ApplicationState.RUNNING
-        case YarnApplicationState.FINISHED => ApplicationState.FINISHED
-        case YarnApplicationState.FAILED => ApplicationState.FAILED
-        case YarnApplicationState.KILLED => ApplicationState.KILLED
-        case _ =>
-          warn(s"The yarn driver state: $state is not supported, " +
-            "mark the application state as UNKNOWN.")
-          ApplicationState.UNKNOWN
-      }
-
-Build A Custom Application Operation
-------------------------------------
-
-- reference kyuubi-server
-
-   .. code-block:: xml
-
-      <dependency>
-         <groupId>org.apache.kyuubi</groupId>
-         <artifactId>kyuubi-server_2.12</artifactId>
-         <version>1.5.2-incubating</version>
-         <scope>provided</scope>
-      </dependency>
-
-- create a custom class which implements the ``org.apache.kyuubi.engine.ApplicationOperation``.
-
-- create a directory ``META-INF/services`` and, inside it, a file named ``org.apache.kyuubi.engine.ApplicationOperation``:
-
-   .. code-block:: java
-
-      META-INF/services/org.apache.kyuubi.engine.ApplicationOperation
-
-   then add the fully-qualified name of your custom application operation into the file.
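-
-A minimal sketch of such an implementation is shown below. It is illustrative only: the class name, the ``my-cluster`` prefix and the returned values are assumptions, and the import layout may differ across Kyuubi versions.
-
-   .. code-block:: scala
-
-      import org.apache.kyuubi.config.KyuubiConf
-      import org.apache.kyuubi.engine._
-
-      class MyClusterApplicationOperation extends ApplicationOperation {
-
-        override def initialize(conf: KyuubiConf): Unit = {
-          // create and cache a client for your cluster manager here
-        }
-
-        override def stop(): Unit = {
-          // release the client and any other resources here
-        }
-
-        // only handle sessions whose cluster manager matches this operation
-        override def isSupported(clusterManager: Option[String]): Boolean =
-          clusterManager.exists(_.startsWith("my-cluster"))
-
-        override def killApplicationByTag(tag: String): KillResponse = {
-          // look up the application by its unique tag and try to kill it
-          (false, s"application tagged with $tag was not found")
-        }
-
-        override def getApplicationInfoByTag(tag: String): ApplicationInfo = {
-          // query the cluster manager and map its state to ApplicationState
-          ApplicationInfo(id = null, name = null, state = ApplicationState.NOT_FOUND)
-        }
-      }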
-
-
-Enable Custom Application Operation
------------------------------------
-
-.. note:: Kyuubi uses Java SPI to load the custom Application Operation
-
-- compile and put the jar into ``$KYUUBI_HOME/jars``
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/server/authentication.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/server/authentication.rst.txt
deleted file mode 100644
index ab23804..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/server/authentication.rst.txt
+++ /dev/null
@@ -1,83 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Configure Kyuubi to use Custom Authentication
-=============================================
-
-Besides the `builtin authentication`_ methods, kyuubi supports custom
-authentication implementations of `org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider`.
-
-.. code-block:: scala
-
-   package org.apache.kyuubi.service.authentication
-
-   import javax.security.sasl.AuthenticationException
-
-   trait PasswdAuthenticationProvider {
-
-     /**
-      * The authenticate method is called by the Kyuubi Server authentication layer
-      * to authenticate users for their requests.
-      * If a user is to be granted, return nothing/throw nothing.
-      * When a user is to be disallowed, throw an appropriate [[AuthenticationException]].
-      *
-      * @param user     The username received over the connection request
-      * @param password The password received over the connection request
-      *
-      * @throws AuthenticationException When a user is found to be invalid by the implementation
-      */
-     @throws[AuthenticationException]
-     def authenticate(user: String, password: String): Unit
-   }
-
-Build A Custom Authenticator
-----------------------------
-
-To create a custom authenticator class derived from the above interface, we need to:
-
-- Reference the library
-
-.. code-block:: xml
-
-   <dependency>
-      <groupId>org.apache.kyuubi</groupId>
-      <artifactId>kyuubi-common_2.12</artifactId>
-      <version>1.5.2-incubating</version>
-      <scope>provided</scope>
-   </dependency>
-
-- Implement PasswdAuthenticationProvider - `Sample Code`_
-
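-A minimal sketch of such a provider (illustrative only: the package, class name and hard-coded credentials below are assumptions for demonstration, not a recommended practice):
-
-.. code-block:: scala
-
-   package org.apache.kyuubi.example
-
-   import javax.security.sasl.AuthenticationException
-
-   import org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider
-
-   class MyAuthenticationProvider extends PasswdAuthenticationProvider {
-
-     // grant only a single demo user; replace this with a lookup against your user store
-     override def authenticate(user: String, password: String): Unit = {
-       if (user != "demo" || password != "demo-password") {
-         throw new AuthenticationException(s"User $user is not allowed to connect")
-       }
-     }
-   }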
-
-Enable Custom Authentication
-----------------------------
-
-To enable the custom authentication method, we need to
-
-- Put the jar package to ``$KYUUBI_HOME/jars`` directory to make it visible for
-  the classpath of the kyuubi server.
-- Configure the following properties to ``$KYUUBI_HOME/conf/kyuubi-defaults.conf``
-  on each node where kyuubi server is installed.
-
-.. code-block:: properties
-
-   kyuubi.authentication=CUSTOM
-   kyuubi.authentication.custom.class=YourAuthenticationProvider
-
-- Restart all the kyuubi server instances
-
-.. _builtin authentication: ../../security/authentication.html
-.. _Sample Code: https://github.com/kyuubilab/example-custom-authentication/blob/main/src/main/scala/org/apache/kyuubi/example/MyAuthenticationProvider.scala
\ No newline at end of file
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/server/configuration.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/server/configuration.rst.txt
deleted file mode 100644
index 94b1da9..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/server/configuration.rst.txt
+++ /dev/null
@@ -1,73 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Inject Session Conf with Custom Config Advisor
-==============================================
-
-.. versionadded:: 1.5.0
-
-Session Conf Advisor
---------------------
-
-Kyuubi supports injecting session configs with a custom config advisor. It is usually used to append or overwrite session configs dynamically, so Kyuubi administrators have the ability to control user-specified configs.
-
-The steps of injecting session configs
---------------------------------------
-
-1. create a custom class which implements the ``org.apache.kyuubi.plugin.SessionConfAdvisor``.
-2. compile and put the jar into ``$KYUUBI_HOME/jars``
-3. adding configuration at ``kyuubi-defaults.conf``:
-
-   .. code-block:: java
-
-      kyuubi.session.conf.advisor=${classname}
-
-The ``org.apache.kyuubi.plugin.SessionConfAdvisor`` has a zero-arg constructor and a single method that takes the user and the session conf and returns a new conf map.
-
-.. code-block:: java
-
-   public interface SessionConfAdvisor {
-     default Map<String, String> getConfOverlay(String user, Map<String, String> sessionConf) {
-       return Collections.EMPTY_MAP;
-     }
-   }
-
-.. note:: The returned conf map will overwrite the original session conf.
-
-Example
--------
-
-We have a custom class ``CustomSessionConfAdvisor``:
-
-.. code-block:: java
-
-   public class CustomSessionConfAdvisor implements SessionConfAdvisor {
-     @Override
-     public Map<String, String> getConfOverlay(String user, Map<String, String> sessionConf) {
-       if ("uly".equals(user)) {
-         return Collections.singletonMap("spark.driver.memory", "1G");
-       } else {
-         return Collections.emptyMap();
-       }
-     }
-   }
-
-If a user `uly` creates a connection with:
-
-.. code-block:: java
-
-   jdbc:hive2://localhost:10009/;hive.server2.proxy.user=uly;#spark.driver.memory=2G
-
-The final Spark application will allocate ``1G`` rather than ``2G`` for the driver jvm.
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/server/events.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/server/events.rst.txt
deleted file mode 100644
index b5d484c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/server/events.rst.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Handle Events with Custom Event Handler
-=======================================
-
-.. caution:: unstable
-
-.. warning::
-   This page is still in-progress.
diff --git a/content/docs/r1.6.0-incubating/_sources/extensions/server/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/extensions/server/index.rst.txt
deleted file mode 100644
index d8c860d..0000000
--- a/content/docs/r1.6.0-incubating/_sources/extensions/server/index.rst.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Server Side Extensions
-======================
-
-Server side extensions inject custom functionality into some of the modules of the
-Kyuubi server. They are applied by Kyuubi administrators to extend the
-capabilities of Kyuubi servers.
-
-.. toctree::
-    :maxdepth: 1
-
-    authentication
-    configuration
-    events
-    applications
diff --git a/content/docs/r1.6.0-incubating/_sources/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/index.rst.txt
deleted file mode 100644
index b3e32da..0000000
--- a/content/docs/r1.6.0-incubating/_sources/index.rst.txt
+++ /dev/null
@@ -1,145 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-.. Kyuubi documentation master file, created by
-   sphinx-quickstart on Wed Oct 28 14:23:28 2020.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
-
-HOME
-====
-
-Kyuubi™ is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of `Apache Spark™ <http://spark.apache.org/>`_.
-
-.. raw:: html
-
-   <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;lightbox&quot;:false,&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-12-08T06:25:12.937Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164  [...]
-
-In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others.
-
-For example, you can use Kyuubi, Spark and `Apache Iceberg <https://iceberg.apache.org/>`_ to build and manage Data Lake with pure SQL for both data processing e.g. ETL, and analytics e.g. BI.
-All workloads can be done on one platform, using one copy of data, with one SQL interface.
-
-Kyuubi provides the following features:
-
-Multi-tenancy
--------------
-
-Kyuubi supports end-to-end multi-tenancy,
-and this is why we created this project even though the Spark `Thrift JDBC/ODBC server <http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server>`_ already exists.
-
-1. Supports multi-client concurrency and authentication
-2. Supports one Spark application per account (SPA)
-3. Supports QUEUE/NAMESPACE Access Control Lists (ACL)
-4. Supports metadata & data Access Control Lists
-
-Users who have valid accounts could use all kinds of client tools, e.g.
-Hive Beeline, `HUE <https://gethue.com/>`_, `DBeaver <https://dbeaver.io/>`_,
-`SQuirreL SQL Client <http://squirrel-sql.sourceforge.net/>`_, etc,
-to operate with Kyuubi server concurrently.
-
-The SPA policy makes sure 1) a user account can only get computing resources with managed ACLs, e.g.
-`Queue Access Control Lists <https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Queue_Access_Control_Lists>`_,
-from cluster managers, e.g.
-`Apache Hadoop YARN <https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html>`_,
-`Kubernetes (K8s) <https://kubernetes.io/>`_ to create the Spark application;
-2) a user account can only access data and metadata from a storage system, e.g.
-`Apache Hadoop HDFS <https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html>`_,
-with permissions.
-
-Ease of Use
-------------
-
-You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data.
-It helps you focus on the design and implementation of your business system.
-
-Run Anywhere
-------------
-
-Kyuubi can submit Spark applications to all supported cluster managers, including YARN, Mesos, Kubernetes, Standalone, and local.
-
-The SPA policy also makes it possible for you to launch different applications against different cluster managers.
-
-High Performance
-----------------
-
-Kyuubi is built on Apache Spark, a lightning-fast unified analytics engine.
-
- - **Concurrent execution**: multiple Spark applications work together
- - **Quick response**: long-running Spark applications without startup cost
- - **Optimal execution plan**: fully supports the Spark SQL Catalyst Optimizer
-
-Authentication & Authorization
-------------------------------
-
-With strong authentication and fine-grained column/row level authorization,
-Kyuubi keeps your system and data secure.
-
-High Availability
------------------
-
-Kyuubi provides both high availability and load balancing solutions based on ZooKeeper.
-
-.. toctree::
-   :caption: Admin Guide
-   :maxdepth: 2
-   :glob:
-
-   quick_start/index
-   deployment/index
-   Security <security/index>
-   monitor/index
-   tools/index
-
-.. toctree::
-   :caption: User Guide
-   :maxdepth: 2
-   :glob:
-
-   Clients & APIs <client/index>
-   SQL References <sql/index>
-
-.. toctree::
-   :caption: Extension Guide
-   :maxdepth: 2
-   :glob:
-
-   Extensions <extensions/index>
-
-.. toctree::
-   :caption: Connectors
-   :maxdepth: 2
-   :glob:
-
-   connector/index
-
-.. toctree::
-   :caption: Kyuubi Insider
-   :maxdepth: 2
-
-   overview/index
-
-.. toctree::
-   :caption: Contributing
-   :maxdepth: 2
-
-   develop_tools/index
-   community/index
-
-.. toctree::
-   :caption: Appendix
-   :maxdepth: 2
-
-   appendix/index
diff --git a/content/docs/r1.6.0-incubating/_sources/monitor/events.md.txt b/content/docs/r1.6.0-incubating/_sources/monitor/events.md.txt
deleted file mode 100644
index 3358d57..0000000
--- a/content/docs/r1.6.0-incubating/_sources/monitor/events.md.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Monitoring Kyuubi - Events System
diff --git a/content/docs/r1.6.0-incubating/_sources/monitor/index.rst.txt b/content/docs/r1.6.0-incubating/_sources/monitor/index.rst.txt
deleted file mode 100644
index c69c023..0000000
--- a/content/docs/r1.6.0-incubating/_sources/monitor/index.rst.txt
+++ /dev/null
@@ -1,28 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-Monitoring
-==========
-
-In this section, you will learn how to monitor Kyuubi with logging, metrics, etc.
-
-.. toctree::
-    :maxdepth: 2
-    :numbered: 3
-    :glob:
-
-    logging
-    metrics
-    trouble_shooting
diff --git a/content/docs/r1.6.0-incubating/_sources/monitor/logging.md.txt b/content/docs/r1.6.0-incubating/_sources/monitor/logging.md.txt
deleted file mode 100644
index 57c673c..0000000
--- a/content/docs/r1.6.0-incubating/_sources/monitor/logging.md.txt
+++ /dev/null
@@ -1,268 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-
-# Monitoring Kyuubi - Logging System
-
-Since version v1.5.0, Kyuubi uses [Apache Log4j2](https://logging.apache.org/log4j/2.x/) for logging. For versions v1.4.1 and below, it uses [Apache Log4j](https://logging.apache.org).
-
-In general, there are mainly three components in the Kyuubi architecture that will produce component-oriented logs to help you trace breadcrumbs for SQL workloads against Kyuubi.
-
-- Logs of Kyuubi Server
-- Logs of Kyuubi Engines
-- Operation logs
-
-In addition, a Kyuubi deployment for production usually relies on some other external systems.
-For example, both Kyuubi servers and engines will use [Apache ZooKeeper](https://zookeeper.apache.org/) for service discovery.
-The instructions for external system logging will not be included in this article.
-
-## Logs of Kyuubi Server
-
-Logs of the Kyuubi Server show us the activities of the server instance, including how it starts and stops and how it responds to client requests, etc.
-
-### Configuring Server Logging
-
-#### Basic Configurations
-
-You can configure it by adding a `log4j2.xml` file in the `$KYUUBI_HOME/conf` directory.
-One way to start is to make a copy of the existing `log4j2.xml.template` located there.
-
-For example,
-
-```shell
-# cd $KYUUBI_HOME
-cp conf/log4j2.xml.template conf/log4j2.xml
-```
-
-With or without the above step, by default the server logging will redirect the logs to a file named `kyuubi-${env:USER}-org.apache.kyuubi.server.KyuubiServer-${env:HOSTNAME}.out` under the `$KYUUBI_HOME/logs` directory.
-
-For example, you can easily find from the console output where the server log goes when starting a Kyuubi server.
-
-```shell
-$ export SPARK_HOME=/Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2
-$ cd ~/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin
-$ bin/kyuubi start
-```
-
-```log
-Starting Kyuubi Server from /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin
-Warn: Not find kyuubi environment file /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/conf/kyuubi-env.sh, using default ones...
-JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_251.jdk/Contents/Home
-KYUUBI_HOME: /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin
-KYUUBI_CONF_DIR: /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/conf
-KYUUBI_LOG_DIR: /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/logs
-KYUUBI_PID_DIR: /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/pid
-KYUUBI_WORK_DIR_ROOT: /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/work
-SPARK_HOME: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2
-SPARK_CONF_DIR: /Users/kentyao/Downloads/spark/spark-3.2.0-bin-hadoop3.2/conf
-HADOOP_CONF_DIR:
-YARN_CONF_DIR:
-Starting org.apache.kyuubi.server.KyuubiServer, logging to /Users/kentyao/svn-kyuubi/v1.3.1-incubating-rc0/apache-kyuubi-1.3.1-incubating-bin/logs/kyuubi-kentyao-org.apache.kyuubi.server.KyuubiServer-hulk.local.out
-Welcome to
-  __  __                           __
- /\ \/\ \                         /\ \      __
- \ \ \/'/'  __  __  __  __  __  __\ \ \____/\_\
-  \ \ , <  /\ \/\ \/\ \/\ \/\ \/\ \\ \ '__`\/\ \
-   \ \ \\`\\ \ \_\ \ \ \_\ \ \ \_\ \\ \ \L\ \ \ \
-    \ \_\ \_\/`____ \ \____/\ \____/ \ \_,__/\ \_\
-     \/_/\/_/`/___/> \/___/  \/___/   \/___/  \/_/
-                /\___/
-                \/__/
-```
-
-#### KYUUBI_LOG_DIR
-
-You may also notice that there is an environment variable called `KYUUBI_LOG_DIR` in the above example.
-
-`KYUUBI_LOG_DIR` determines the folder where we want to put our server log files.
-
-For example, the command below will put the log files in `/Users/kentyao/tmp`.
-
-```shell
-$ mkdir /Users/kentyao/tmp
-$ KYUUBI_LOG_DIR=/Users/kentyao/tmp bin/kyuubi start
-```
-
-```log
-Starting org.apache.kyuubi.server.KyuubiServer, logging to /Users/kentyao/tmp/kyuubi-kentyao-org.apache.kyuubi.server.KyuubiServer-hulk.local.out
-```
-
-#### KYUUBI_MAX_LOG_FILES
-
-`KYUUBI_MAX_LOG_FILES` controls how many log files are retained after a Kyuubi server reboots.
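-
-For example, a minimal sketch that keeps at most 10 server log files; the variable can also be exported in `conf/kyuubi-env.sh` to make the setting persistent:
-
-```shell
-$ KYUUBI_MAX_LOG_FILES=10 bin/kyuubi start
-```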
-
-#### Custom Log4j2 Settings
-
-Taking control of `$KYUUBI_HOME/conf/log4j2.xml` also gives us the ability to customize server logging as we want.
-
-For example, we can disable the console appender and enable the file appender like this:
-
-```xml
-<Configuration status="INFO">
-  <Appenders>
-    <File name="fa" fileName="log/dummy.log">
-      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %p %c: %m%n"/>
-      <Filters>
-        <RegexFilter regex=".*Thrift error occurred during processing of message.*" onMatch="DENY" onMismatch="NEUTRAL"/>
-      </Filters>
-    </File>
-  </Appenders>
-  <Loggers>
-    <Root level="INFO">
-      <AppenderRef ref="fa"/>
-    </Root>
-  </Loggers>
-</Configuration>
-```
-
-Then everything goes to `log/dummy.log`.
-
-## Logs of Spark SQL Engine
-
-The Spark SQL Engine is one type of Kyuubi Engine and also a typical Spark application.
-Thus, its logs mainly contain the logs of the Spark driver.
-Meanwhile, they also include how all the services of an engine start and stop, and how the engine responds to the incoming calls from Kyuubi servers, etc.
-
-In general, when an exception occurs, we are able to find more information and clues in the engine's logs.
-
-### Configuring Engine Logging
-
-Please refer to the Apache Spark online documentation - [Configuring Logging](https://spark.apache.org/docs/latest/configuration.html#configuring-logging) - for instructions.
-
-### Where to Find the Engine Log
-
-The engine logs are located differently depending on the deploy mode and the cluster manager.
-When using local backend or `client` deploy mode for other cluster managers, such as YARN, you can find the whole engine log in `$KYUUBI_WORK_DIR_ROOT/${session username}/kyuubi-spark-sql-engine.log.${num}`.
-Different session users have different folders to group all live and historical engine logs.
-Each engine has one and only one engine log.
-When using the `cluster` deploy mode, the local engine logs contain very little information; the main parts of the engine logs are on the remote driver side, e.g. for a YARN cluster, they are in the ApplicationMaster's log.
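-
-For example, a quick way to follow a live engine log on the server host (a sketch assuming the default work dir, a session user named `kent`, and the first log index `0`):
-
-```shell
-$ tail -f $KYUUBI_WORK_DIR_ROOT/kent/kyuubi-spark-sql-engine.log.0
-```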
-
-## Logs of Flink SQL Engine
-
-The Flink SQL Engine is one type of Kyuubi Engine and also a typical Flink application.
-Thus, its logs mainly contain the logs of the Flink JobManager and TaskManagers.
-Meanwhile, they also include how all the services of an engine start and stop, and how the engine responds to the incoming calls from Kyuubi servers, etc.
-
-In general, when an exception occurs, we are able to find more information and clues in the engine's logs.
-
-### Configuring Engine Logging
-
-Please refer to the Apache Flink online documentation - [Configuring Logging](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/advanced/logging) - for instructions.
-
-### Where to Find the Engine Log
-
-The engine logs are located differently depending on the deploy mode and the cluster manager.
-When using local backend or `client` deploy mode for other cluster managers, such as YARN, you can find the whole engine log in `$KYUUBI_WORK_DIR_ROOT/${session username}/kyuubi-flink-sql-engine.log.${num}`.
-Different session users have different folders to group all live and historical engine logs.
-Each engine has one and only one engine log.
-When using the `cluster` deploy mode, the local engine logs contain very little information; the main parts of the engine logs are on the remote driver side, e.g. for a YARN cluster, they are in the ApplicationMaster's log.
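-
-Similarly, a sketch for following a live Flink engine log on the server host (assuming the default work dir, a session user named `kent`, and the first log index `0`):
-
-```shell
-$ tail -f $KYUUBI_WORK_DIR_ROOT/kent/kyuubi-flink-sql-engine.log.0
-```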
-
-## Operation Logs
-
-Operation logs show how SQL queries are executed, such as query planning, execution, and statistics reporting.
-
-Operation logs can reveal directly to end-users how their queries are being executed on the server/engine-side, including some process-oriented information, and why their queries are slow or in error.
-
-For example, when you, as an end-user, use `beeline` to connect to a Kyuubi server and execute a query like the one below.
-
-```shell
-bin/beeline -u 'jdbc:hive2://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kent -e 'select * from src;'
-```
-
-You will get both the final results and the corresponding operation logs telling you the journey of the query.
-
-```log
-0: jdbc:hive2://10.242.189.214:2181/> select * from src;
-2021-10-27 17:00:19.399 INFO operation.ExecuteStatement: Processing kent's query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597]: INITIALIZED_STATE -> PENDING_STATE, statement: select * from src
-2021-10-27 17:00:19.401 INFO operation.ExecuteStatement: Processing kent's query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597]: PENDING_STATE -> RUNNING_STATE, statement: select * from src
-2021-10-27 17:00:19.400 INFO operation.ExecuteStatement: Processing kent's query[26e169a2-6c06-450a-b758-e577ac673d70]: INITIALIZED_STATE -> PENDING_STATE, statement: select * from src
-2021-10-27 17:00:19.401 INFO operation.ExecuteStatement: Processing kent's query[26e169a2-6c06-450a-b758-e577ac673d70]: PENDING_STATE -> RUNNING_STATE, statement: select * from src
-2021-10-27 17:00:19.402 INFO operation.ExecuteStatement:
-           Spark application name: kyuubi_USER_kent_6d4b5e53-ddd2-420c-b04f-326fb2b17e18
-                 application ID: local-1635318669122
-                 application web UI: http://10.242.189.214:50250
-                 master: local[*]
-                 deploy mode: client
-                 version: 3.2.0
-           Start time: 2021-10-27T15:11:08.416
-           User: kent
-2021-10-27 17:00:19.408 INFO metastore.HiveMetaStore: 6: get_database: default
-2021-10-27 17:00:19.408 INFO HiveMetaStore.audit: ugi=kent	ip=unknown-ip-addr	cmd=get_database: default
-2021-10-27 17:00:19.424 WARN conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
-2021-10-27 17:00:19.424 WARN conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
-2021-10-27 17:00:19.424 WARN conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
-2021-10-27 17:00:19.424 INFO metastore.HiveMetaStore: 6: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
-2021-10-27 17:00:19.425 INFO metastore.ObjectStore: ObjectStore, initialize called
-2021-10-27 17:00:19.430 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
-2021-10-27 17:00:19.431 INFO metastore.ObjectStore: Initialized ObjectStore
-2021-10-27 17:00:19.434 INFO metastore.HiveMetaStore: 6: get_table : db=default tbl=src
-2021-10-27 17:00:19.434 INFO HiveMetaStore.audit: ugi=kent	ip=unknown-ip-addr	cmd=get_table : db=default tbl=src
-2021-10-27 17:00:19.449 INFO metastore.HiveMetaStore: 6: get_table : db=default tbl=src
-2021-10-27 17:00:19.450 INFO HiveMetaStore.audit: ugi=kent	ip=unknown-ip-addr	cmd=get_table : db=default tbl=src
-2021-10-27 17:00:19.510 INFO operation.ExecuteStatement: Processing kent's query[26e169a2-6c06-450a-b758-e577ac673d70]: RUNNING_STATE -> RUNNING_STATE, statement: select * from src
-2021-10-27 17:00:19.544 INFO memory.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 343.6 KiB, free 408.6 MiB)
-2021-10-27 17:00:19.558 INFO memory.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 33.5 KiB, free 408.5 MiB)
-2021-10-27 17:00:19.559 INFO spark.SparkContext: Created broadcast 5 from
-2021-10-27 17:00:19.600 INFO mapred.FileInputFormat: Total input files to process : 1
-2021-10-27 17:00:19.627 INFO spark.SparkContext: Starting job: collect at ExecuteStatement.scala:97
-2021-10-27 17:00:19.629 INFO kyuubi.SQLOperationListener: Query [26e169a2-6c06-450a-b758-e577ac673d70]: Job 5 started with 1 stages, 1 active jobs running
-2021-10-27 17:00:19.631 INFO kyuubi.SQLOperationListener: Query [26e169a2-6c06-450a-b758-e577ac673d70]: Stage 5 started with 1 tasks, 1 active stages running
-2021-10-27 17:00:19.713 INFO kyuubi.SQLOperationListener: Finished stage: Stage(5, 0); Name: 'collect at ExecuteStatement.scala:97'; Status: succeeded; numTasks: 1; Took: 83 msec
-2021-10-27 17:00:19.713 INFO scheduler.DAGScheduler: Job 5 finished: collect at ExecuteStatement.scala:97, took 0.085454 s
-2021-10-27 17:00:19.713 INFO scheduler.StatsReportListener: task runtime:(count: 1, mean: 78.000000, stdev: 0.000000, max: 78.000000, min: 78.000000)
-2021-10-27 17:00:19.713 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.713 INFO scheduler.StatsReportListener: 	78.0 ms	78.0 ms	78.0 ms	78.0 ms	78.0 ms	78.0 ms	78.0 ms	78.0 ms	78.0 ms
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: shuffle bytes written:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: 	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.714 INFO scheduler.StatsReportListener: 	0.0 ms	0.0 ms	0.0 ms	0.0 ms	0.0 ms	0.0 ms	0.0 ms	0.0 ms	0.0 ms
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: remote bytes read:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: 	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B	0.0 B
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: task result size:(count: 1, mean: 1471.000000, stdev: 0.000000, max: 1471.000000, min: 1471.000000)
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.715 INFO scheduler.StatsReportListener: 	1471.0 B	1471.0 B	1471.0 B	1471.0 B	1471.0 B	1471.0 B	1471.0 B	1471.0 B	1471.0 B
-2021-10-27 17:00:19.717 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 61.538462, stdev: 0.000000, max: 61.538462, min: 61.538462)
-2021-10-27 17:00:19.717 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.717 INFO scheduler.StatsReportListener: 	62 %	62 %	62 %	62 %	62 %	62 %	62 %	62 %	62 %
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: 	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: other time pct: (count: 1, mean: 38.461538, stdev: 0.000000, max: 38.461538, min: 38.461538)
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: 	0%	5%	10%	25%	50%	75%	90%	95%	100%
-2021-10-27 17:00:19.718 INFO scheduler.StatsReportListener: 	38 %	38 %	38 %	38 %	38 %	38 %	38 %	38 %	38 %
-2021-10-27 17:00:19.719 INFO kyuubi.SQLOperationListener: Query [26e169a2-6c06-450a-b758-e577ac673d70]: Job 5 succeeded, 0 active jobs running
-2021-10-27 17:00:19.728 INFO codegen.CodeGenerator: Code generated in 12.277091 ms
-2021-10-27 17:00:19.729 INFO operation.ExecuteStatement: Processing kent's query[26e169a2-6c06-450a-b758-e577ac673d70]: RUNNING_STATE -> FINISHED_STATE, statement: select * from src, time taken: 0.328 seconds
-2021-10-27 17:00:19.731 INFO operation.ExecuteStatement: Query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597] in FINISHED_STATE
-2021-10-27 17:00:19.731 INFO operation.ExecuteStatement: Processing kent's query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597]: RUNNING_STATE -> FINISHED_STATE, statement: select * from src, time taken: 0.33 seconds
-+-------------------------------------------------+--------------------+
-|                    version()                    | DATE '2021-10-27'  |
-+-------------------------------------------------+--------------------+
-| 3.2.0 5d45a415f3a29898d92380380cfd82bfc7f579ea  | 2021-10-27         |
-+-------------------------------------------------+--------------------+
-1 row selected (0.341 seconds)
-```
-
-## Further Readings
-
-- [Monitoring Kyuubi - Events System](events.md)
-- [Monitoring Kyuubi - Server Metrics](metrics.md)
-- [Trouble Shooting](trouble_shooting.md)
-- Spark Online Documentation
-    - [Monitoring and Instrumentation](http://spark.apache.org/docs/latest/monitoring.html)
diff --git a/content/docs/r1.6.0-incubating/_sources/monitor/metrics.md.txt b/content/docs/r1.6.0-incubating/_sources/monitor/metrics.md.txt
deleted file mode 100644
index 1123299..0000000
--- a/content/docs/r1.6.0-incubating/_sources/monitor/metrics.md.txt
+++ /dev/null
@@ -1,95 +0,0 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-
-# Monitoring Kyuubi - Server Metrics
-
-Kyuubi has a configurable metrics system based on the [Dropwizard Metrics Library](https://metrics.dropwizard.io/).
-This allows users to report Kyuubi metrics to a variety of sinks via `kyuubi.metrics.reporters`.
-The metrics provide instrumentation for specific activities and the Kyuubi server.
-
-## Configurations
-
-The metrics system is configured via `$KYUUBI_HOME/conf/kyuubi-defaults.conf`.
-
-Key | Default | Meaning | Type | Since
---- | --- | --- | --- | ---
-`kyuubi.metrics.enabled`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>true</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Set to true to enable kyuubi metrics system</div>|<div style='width: 30pt'>boolean</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.reporters`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>JSON</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>A comma separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to json file periodically.</li> <li>PROMET [...]
-`kyuubi.metrics.console.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to console</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.json.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to json file</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.json.location`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>metrics</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Where the json metrics file located</div>|<div style='width: 30pt'>string</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.prometheus.path`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>/metrics</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>URI context path of prometheus metrics HTTP server</div>|<div style='width: 30pt'>string</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.prometheus.port`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>10019</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Prometheus metrics HTTP server port</div>|<div style='width: 30pt'>int</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.slf4j.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to SLF4J logger</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
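-
-For example, a minimal sketch of a `kyuubi-defaults.conf` snippet that turns on the PROMETHEUS reporter in addition to the default JSON one (the port below is just the default value for illustration):
-
-```shell
-$ cat >> $KYUUBI_HOME/conf/kyuubi-defaults.conf <<EOF
-kyuubi.metrics.enabled=true
-kyuubi.metrics.reporters=PROMETHEUS,JSON
-kyuubi.metrics.prometheus.port=10019
-EOF
-```
-
-With this in place, the metrics can be scraped from `http://<kyuubi-server-host>:10019/metrics` after the server restarts.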
-
-## Metrics
-
-These metrics include:
-
-Metrics Prefix | Metrics Suffix | Type | Since | Description
----|---|---|---|---
... 372218 lines suppressed ...