Posted to commits@kyuubi.apache.org by gi...@apache.org on 2022/09/06 15:10:20 UTC

[incubator-kyuubi-website] branch asf-site updated: deploy: 63bdf091eed76963a06899c44e6e472114c691bb

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 6d97949  deploy: 63bdf091eed76963a06899c44e6e472114c691bb
6d97949 is described below

commit 6d97949e19110a0b2799ca8d906c607cd84ec296
Author: pan3793 <pa...@users.noreply.github.com>
AuthorDate: Tue Sep 6 15:10:14 2022 +0000

    deploy: 63bdf091eed76963a06899c44e6e472114c691bb
---
 content/docs/latest/.buildinfo                     |    4 -
 content/docs/latest/404.html                       | 1292 ++++-
 .../_images/configure_database_connection.png      |  Bin 0 -> 609661 bytes
 .../_images/configure_database_connection_ha.png   |  Bin 0 -> 610635 bytes
 content/docs/latest/_images/connected.png          |  Bin 17478 -> 0 bytes
 .../_images/dbeaver_connnect_to_database.png       |  Bin 146299 -> 0 bytes
 .../dbeaver_connnect_to_database_connection.png    |  Bin 120468 -> 0 bytes
 .../dbeaver_connnect_to_database_driver.png        |  Bin 119670 -> 0 bytes
 .../_images/dbeaver_connnect_to_database_port.png  |  Bin 133784 -> 0 bytes
 .../docs/latest/_images/delta_lake_functions.png   |  Bin 104054 -> 0 bytes
 content/docs/latest/_images/desc_database.png      |  Bin 389808 -> 0 bytes
 content/docs/latest/_images/download_driver.png    |  Bin 121709 -> 0 bytes
 content/docs/latest/_images/kyuubi.png             |  Bin 129424 -> 0 bytes
 content/docs/latest/_images/metadata.png           |  Bin 396949 -> 891987 bytes
 .../latest/_images/new_database_connection.png     |  Bin 0 -> 719371 bytes
 content/docs/latest/_images/query.png              |  Bin 384802 -> 0 bytes
 content/docs/latest/_images/query41_result.png     |  Bin 555745 -> 0 bytes
 content/docs/latest/_images/tpcds_schema.png       |  Bin 95662 -> 0 bytes
 content/docs/latest/_images/trino-query-page.png   |  Bin 0 -> 133468 bytes
 content/docs/latest/_images/viewdata.png           |  Bin 778197 -> 0 bytes
 .../docs/latest/_sources/appendix/index.rst.txt    |   24 +
 .../latest/_sources/appendix/terminology.md.txt    |  164 +
 .../_sources/changelog/v1.5.1-incubating.md.txt    |   11 +
 .../_sources/changelog/v1.5.2-incubating.md.txt    |   16 +
 .../_sources/changelog/v1.6.0-incubating.md.txt    |  618 +++
 .../client/advanced/configurations.rst.txt         |   17 +
 .../client/advanced/features/engine_pool.rst.txt   |   18 +
 .../advanced/features/engine_resouces.rst.txt      |   18 +
 .../advanced/features/engine_share_level.rst.txt   |   18 +
 .../client/advanced/features/engine_ttl.rst.txt    |   18 +
 .../client/advanced/features/engine_type.rst.txt   |   18 +
 .../client/advanced/features/index.rst.txt         |   30 +
 .../client/advanced/features/plan_only.rst.txt     |   18 +
 .../client/advanced/features/scala.rst.txt         |   18 +
 .../latest/_sources/client/advanced/index.rst.txt  |   25 +
 .../_sources/client/advanced/kerberos.md.txt       |  224 +
 .../_sources/client/advanced/logging.rst.txt       |   17 +
 .../_sources/client/bi_tools/datagrip.md.txt       |   57 +
 .../_sources/client/bi_tools/dbeaver.rst.txt       |  125 +
 .../latest/_sources/client/bi_tools/hue.md.txt     |  130 +
 .../latest/_sources/client/bi_tools/index.rst.txt  |   32 +
 .../_sources/client/bi_tools/powerbi.rst.txt       |   21 +
 .../_sources/client/bi_tools/superset.rst.txt      |   21 +
 .../_sources/client/bi_tools/tableau.rst.txt       |   21 +
 .../_sources/client/cli/hive_beeline.rst.txt       |   31 +
 .../docs/latest/_sources/client/cli/index.rst.txt  |   23 +
 .../_sources/client/cli/kyuubi_beeline.rst.txt     |   22 +
 content/docs/latest/_sources/client/index.rst.txt  |   39 +
 .../latest/_sources/client/jdbc/hive_jdbc.md.txt   |   82 +
 .../docs/latest/_sources/client/jdbc/index.rst.txt |   25 +
 .../_sources/client/jdbc/kyuubi_jdbc.rst.txt       |  160 +
 .../latest/_sources/client/jdbc/mysql_jdbc.rst.txt |   26 +
 .../docs/latest/_sources/client/odbc/index.rst.txt |   24 +
 .../latest/_sources/client/python/index.rst.txt    |   24 +
 .../latest/_sources/client/python/pyhive.rst.txt   |   22 +
 .../docs/latest/_sources/client/rest/index.rst.txt |   24 +
 .../latest/_sources/client/rest/rest_api.md.txt    |  124 +
 .../latest/_sources/client/thrift/index.rst.txt    |   24 +
 .../docs/latest/_sources/client/ui/index.rst.txt   |   24 +
 .../latest/_sources/community/CONTRIBUTING.md.txt  |   61 +
 .../latest/_sources/community/collaborators.md.txt |   22 +
 .../docs/latest/_sources/community/index.rst.txt   |   27 +
 .../docs/latest/_sources/community/release.md.txt  |  283 ++
 .../connector/flink/flink_table_store.rst.txt      |  111 +
 .../latest/_sources/connector/flink/hudi.rst.txt   |  117 +
 .../_sources/connector/flink/iceberg.rst.txt       |  121 +
 .../latest/_sources/connector/flink/index.rst.txt  |   24 +
 .../latest/_sources/connector/hive/index.rst.txt   |   20 +
 .../docs/latest/_sources/connector/index.rst.txt   |   42 +
 .../_sources/connector/spark/delta_lake.rst.txt    |   95 +
 .../spark/delta_lake_with_azure_blob.rst.txt       |  345 ++
 .../connector/spark/flink_table_store.rst.txt      |   90 +
 .../latest/_sources/connector/spark/hudi.rst.txt   |  112 +
 .../_sources/connector/spark/iceberg.rst.txt       |  124 +
 .../latest/_sources/connector/spark/index.rst.txt  |   42 +
 .../latest/_sources/connector/spark/kudu.md.txt    |  185 +
 .../latest/_sources/connector/spark/tidb.rst.txt   |  103 +
 .../latest/_sources/connector/spark/tpcds.rst.txt  |  108 +
 .../latest/_sources/connector/spark/tpch.rst.txt   |  104 +
 .../connector/trino/flink_table_store.rst.txt      |   94 +
 .../_sources/connector/trino/iceberg.rst.txt       |   92 +
 .../latest/_sources/connector/trino/index.rst.txt  |   23 +
 .../deployment/engine_lifecycle.md.txt}            |  367 +-
 .../deployment/engine_on_kubernetes.md.txt         |  121 +
 .../_sources/deployment/engine_on_yarn.md.txt      |  258 +
 .../deployment/engine_share_level.md.txt}          |  534 +--
 .../deployment/high_availability_guide.md.txt}     |  406 +-
 .../_sources/deployment/hive_metastore.md.txt      |  210 +
 .../docs/latest/_sources/deployment/index.rst.txt  |   53 +
 .../deployment/kyuubi_on_kubernetes.md.txt         |  103 +
 .../latest/_sources/deployment/settings.md.txt     |  620 +++
 .../latest/_sources/deployment/spark/aqe.md.txt    |  264 ++
 .../deployment/spark/dynamic_allocation.md.txt     |  237 +
 .../deployment/spark/incremental_collection.md.txt |  121 +
 .../latest/_sources/deployment/spark/index.rst.txt |   32 +
 .../_sources/develop_tools/build_document.md.txt   |   74 +
 .../latest/_sources/develop_tools/building.md.txt  |   86 +
 .../latest/_sources/develop_tools/debugging.md.txt |  110 +
 .../latest/_sources/develop_tools/developer.md.txt |   63 +
 .../_sources/develop_tools/distribution.md.txt     |   56 +
 .../_sources/develop_tools/idea_setup.md.txt       |   96 +
 .../latest/_sources/develop_tools/index.rst.txt    |   31 +
 .../latest/_sources/develop_tools/testing.md.txt   |   54 +
 .../extensions/engines/flink/index.rst.txt         |   25 +
 .../_sources/extensions/engines/hive/index.rst.txt |   25 +
 .../_sources/extensions/engines/index.rst.txt      |   30 +
 .../extensions/engines/spark/functions.md.txt      |   31 +
 .../extensions/engines/spark/index.rst.txt         |   26 +
 .../_sources/extensions/engines/spark/rules.md.txt |   82 +
 .../engines/spark/z-order-benchmark.md.txt         |  240 +
 .../extensions/engines/spark/z-order.md.txt        |  121 +
 .../extensions/engines/trino/index.rst.txt         |   26 +
 .../docs/latest/_sources/extensions/index.rst.txt  |   39 +
 .../extensions/server/applications.rst.txt         |  154 +
 .../extensions/server/authentication.rst.txt       |   83 +
 .../extensions/server/configuration.rst.txt        |   73 +
 .../_sources/extensions/server/events.rst.txt      |   22 +
 .../_sources/extensions/server/index.rst.txt       |   29 +
 content/docs/latest/_sources/index.rst.txt         |  145 +
 content/docs/latest/_sources/monitor/events.md.txt |   19 +
 content/docs/latest/_sources/monitor/index.rst.txt |   28 +
 .../docs/latest/_sources/monitor/logging.md.txt    |  268 ++
 .../docs/latest/_sources/monitor/metrics.md.txt    |   95 +
 .../_sources/monitor/trouble_shooting.md.txt       |  265 ++
 .../overview/architecture.md.txt}                  |  484 +-
 .../docs/latest/_sources/overview/index.rst.txt    |   25 +
 .../latest/_sources/overview/kyuubi_vs_hive.md.txt |   53 +
 .../overview/kyuubi_vs_thriftserver.md.txt         |  258 +
 .../docs/latest/_sources/quick_start/index.rst.txt |   29 +
 .../latest/_sources/quick_start/quick_start.md.txt |  524 +++
 .../quick_start/quick_start_with_helm.md.txt       |  106 +
 .../quick_start/quick_start_with_jdbc.md.txt       |   93 +
 .../quick_start/quick_start_with_jupyter.md.txt    |   20 +
 content/docs/latest/_sources/requirements.txt      |   26 +
 .../_sources/security/authentication.rst.txt       |   46 +
 .../_sources/security/authorization/index.rst.txt  |   22 +
 .../security/authorization/spark/build.md.txt      |  103 +
 .../security/authorization/spark/index.rst.txt     |   27 +
 .../security/authorization/spark/install.md.txt    |  142 +
 .../security/authorization/spark/overview.rst.txt  |   62 +
 .../security/hadoop_credentials_manager.md.txt     |   85 +
 .../docs/latest/_sources/security/index.rst.txt    |   28 +
 content/docs/latest/_sources/security/jdbc.md.txt  |   49 +
 .../docs/latest/_sources/security/kerberos.rst.txt |  118 +
 content/docs/latest/_sources/security/kinit.md.txt |  107 +
 content/docs/latest/_sources/security/ldap.rst.txt |   21 +
 content/docs/latest/_sources/tools/index.rst.txt   |   27 +
 .../latest/_sources/tools/kyuubi-admin.rst.txt     |   71 +
 .../docs/latest/_sources/tools/kyuubi-ctl.md.txt   |  162 +
 .../_sources/tools/spark_block_cleaner.md.txt      |  129 +
 content/docs/latest/_static/basic.css              |    5 +-
 content/docs/latest/_static/css/badge_only.css     |    1 -
 content/docs/latest/_static/css/custom.css         |   33 +-
 .../latest/_static/css/fonts/Roboto-Slab-Bold.woff |  Bin 87624 -> 0 bytes
 .../_static/css/fonts/Roboto-Slab-Bold.woff2       |  Bin 67312 -> 0 bytes
 .../_static/css/fonts/Roboto-Slab-Regular.woff     |  Bin 86288 -> 0 bytes
 .../_static/css/fonts/Roboto-Slab-Regular.woff2    |  Bin 66444 -> 0 bytes
 .../_static/css/fonts/fontawesome-webfont.eot      |  Bin 165742 -> 0 bytes
 .../_static/css/fonts/fontawesome-webfont.svg      | 2671 -----------
 .../_static/css/fonts/fontawesome-webfont.ttf      |  Bin 165548 -> 0 bytes
 .../_static/css/fonts/fontawesome-webfont.woff     |  Bin 98024 -> 0 bytes
 .../_static/css/fonts/fontawesome-webfont.woff2    |  Bin 77160 -> 0 bytes
 .../latest/_static/css/fonts/lato-bold-italic.woff |  Bin 323344 -> 0 bytes
 .../_static/css/fonts/lato-bold-italic.woff2       |  Bin 193308 -> 0 bytes
 .../docs/latest/_static/css/fonts/lato-bold.woff   |  Bin 309728 -> 0 bytes
 .../docs/latest/_static/css/fonts/lato-bold.woff2  |  Bin 184912 -> 0 bytes
 .../_static/css/fonts/lato-normal-italic.woff      |  Bin 328412 -> 0 bytes
 .../_static/css/fonts/lato-normal-italic.woff2     |  Bin 195704 -> 0 bytes
 .../docs/latest/_static/css/fonts/lato-normal.woff |  Bin 309192 -> 0 bytes
 .../latest/_static/css/fonts/lato-normal.woff2     |  Bin 182708 -> 0 bytes
 content/docs/latest/_static/css/theme.css          |    4 -
 content/docs/latest/_static/doctools.js            |   77 +-
 .../docs/latest/_static/documentation_options.js   |    6 +-
 .../docs/latest/_static/fonts/Inconsolata-Bold.ttf |  Bin 109948 -> 0 bytes
 .../latest/_static/fonts/Inconsolata-Regular.ttf   |  Bin 96964 -> 0 bytes
 content/docs/latest/_static/fonts/Inconsolata.ttf  |  Bin 63184 -> 0 bytes
 content/docs/latest/_static/fonts/Lato-Bold.ttf    |  Bin 656544 -> 0 bytes
 content/docs/latest/_static/fonts/Lato-Regular.ttf |  Bin 656568 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-bold.eot   |  Bin 256056 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-bold.ttf   |  Bin 600856 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-bold.woff  |  Bin 309728 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-bold.woff2 |  Bin 184912 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-bolditalic.eot  |  Bin 266158 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-bolditalic.ttf  |  Bin 622572 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-bolditalic.woff |  Bin 323344 -> 0 bytes
 .../_static/fonts/Lato/lato-bolditalic.woff2       |  Bin 193308 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-italic.eot |  Bin 268604 -> 0 bytes
 .../docs/latest/_static/fonts/Lato/lato-italic.ttf |  Bin 639388 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-italic.woff     |  Bin 328412 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-italic.woff2    |  Bin 195704 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-regular.eot     |  Bin 253461 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-regular.ttf     |  Bin 607720 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-regular.woff    |  Bin 309192 -> 0 bytes
 .../latest/_static/fonts/Lato/lato-regular.woff2   |  Bin 182708 -> 0 bytes
 .../docs/latest/_static/fonts/RobotoSlab-Bold.ttf  |  Bin 170616 -> 0 bytes
 .../latest/_static/fonts/RobotoSlab-Regular.ttf    |  Bin 169064 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-bold.eot       |  Bin 79520 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-bold.ttf       |  Bin 170616 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-bold.woff      |  Bin 87624 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-bold.woff2     |  Bin 67312 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.eot    |  Bin 78331 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.ttf    |  Bin 169064 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.woff   |  Bin 86288 -> 0 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.woff2  |  Bin 66444 -> 0 bytes
 .../latest/_static/fonts/fontawesome-webfont.eot   |  Bin 165742 -> 0 bytes
 .../latest/_static/fonts/fontawesome-webfont.svg   | 2671 -----------
 .../latest/_static/fonts/fontawesome-webfont.ttf   |  Bin 165548 -> 0 bytes
 .../latest/_static/fonts/fontawesome-webfont.woff  |  Bin 98024 -> 0 bytes
 .../latest/_static/fonts/fontawesome-webfont.woff2 |  Bin 77160 -> 0 bytes
 content/docs/latest/_static/images/logo_binder.svg |   19 +
 content/docs/latest/_static/images/logo_colab.png  |  Bin 0 -> 7601 bytes
 .../docs/latest/_static/images/logo_deepnote.svg   |    1 +
 .../docs/latest/_static/images/logo_jupyterhub.svg |    1 +
 content/docs/latest/_static/js/badge_only.js       |    1 -
 .../latest/_static/js/html5shiv-printshiv.min.js   |    4 -
 content/docs/latest/_static/js/html5shiv.min.js    |    4 -
 content/docs/latest/_static/js/modernizr.min.js    |    4 -
 content/docs/latest/_static/js/theme.js            |    1 -
 content/docs/latest/_static/kyuubi_logo.png        |  Bin 0 -> 23347 bytes
 content/docs/latest/_static/kyuubi_logo_gray.png   |  Bin 6739 -> 0 bytes
 content/docs/latest/_static/kyuubi_logo_red.png    |  Bin 0 -> 1179 bytes
 content/docs/latest/_static/language_data.js       |    2 +-
 .../_static/locales/ar/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/bg/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/bn/LC_MESSAGES/booktheme.po    |   66 +
 .../_static/locales/ca/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/cs/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/da/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/de/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/el/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/eo/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/es/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/et/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/fi/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/fr/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/hr/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/id/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/it/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/iw/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ja/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ko/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/lt/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/lv/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ml/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/mr/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/ms/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/nl/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/no/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/pl/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/pt/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ro/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ru/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/sk/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/sl/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/sr/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/sv/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ta/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/te/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/tg/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/th/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/tl/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/tr/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/uk/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/ur/LC_MESSAGES/booktheme.po    |   69 +
 .../_static/locales/vi/LC_MESSAGES/booktheme.po    |   81 +
 .../_static/locales/zh_CN/LC_MESSAGES/booktheme.po |   78 +
 .../_static/locales/zh_TW/LC_MESSAGES/booktheme.po |   81 +
 .../docs/latest/_static/sbt-webpack-macros.html    |   11 +
 .../latest/_static/scripts/pydata-sphinx-theme.js  |   32 +
 .../latest/_static/scripts/sphinx-book-theme.js    |    2 +
 .../_static/scripts/sphinx-book-theme.js.map       |    1 +
 content/docs/latest/_static/searchtools.js         |   10 +-
 .../latest/_static/styles/pydata-sphinx-theme.css  |    6 +
 .../latest/_static/styles/sphinx-book-theme.css    |    8 +
 content/docs/latest/_static/styles/theme.css       |  134 +
 .../_static/vendor/fontawesome/5.13.0/LICENSE.txt  |   34 +
 .../vendor/fontawesome/5.13.0/css/all.min.css      |    5 +
 .../fontawesome/5.13.0/webfonts/fa-brands-400.eot  |  Bin 0 -> 133034 bytes
 .../fontawesome/5.13.0/webfonts/fa-brands-400.svg  | 3570 ++++++++++++++
 .../fontawesome/5.13.0/webfonts/fa-brands-400.ttf  |  Bin 0 -> 132728 bytes
 .../fontawesome/5.13.0/webfonts/fa-brands-400.woff |  Bin 0 -> 89824 bytes
 .../5.13.0/webfonts/fa-brands-400.woff2            |  Bin 0 -> 76612 bytes
 .../fontawesome/5.13.0/webfonts/fa-regular-400.eot |  Bin 0 -> 34390 bytes
 .../fontawesome/5.13.0/webfonts/fa-regular-400.svg |  803 ++++
 .../fontawesome/5.13.0/webfonts/fa-regular-400.ttf |  Bin 0 -> 34092 bytes
 .../5.13.0/webfonts/fa-regular-400.woff            |  Bin 0 -> 16800 bytes
 .../5.13.0/webfonts/fa-regular-400.woff2           |  Bin 0 -> 13584 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.eot   |  Bin 0 -> 202902 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.svg   | 4938 ++++++++++++++++++++
 .../fontawesome/5.13.0/webfonts/fa-solid-900.ttf   |  Bin 0 -> 202616 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.woff  |  Bin 0 -> 103300 bytes
 .../fontawesome/5.13.0/webfonts/fa-solid-900.woff2 |  Bin 0 -> 79444 bytes
 content/docs/latest/_static/webpack-macros.html    |   29 +
 content/docs/latest/appendix/index.html            | 1379 +++++-
 content/docs/latest/appendix/terminology.html      | 1704 ++++++-
 .../docs/latest/changelog/v1.5.1-incubating.html   | 1262 +++++
 .../docs/latest/changelog/v1.5.2-incubating.html   | 1262 +++++
 .../docs/latest/changelog/v1.6.0-incubating.html   | 1262 +++++
 .../latest/client/advanced/configurations.html     | 1277 +++++
 .../client/advanced/features/engine_pool.html      | 1277 +++++
 .../client/advanced/features/engine_resouces.html  | 1261 +++++
 .../advanced/features/engine_share_level.html      | 1277 +++++
 .../client/advanced/features/engine_ttl.html       | 1277 +++++
 .../client/advanced/features/engine_type.html      | 1277 +++++
 .../latest/client/advanced/features/index.html     | 1287 +++++
 .../latest/client/advanced/features/plan_only.html | 1277 +++++
 .../latest/client/advanced/features/scala.html     | 1277 +++++
 content/docs/latest/client/advanced/index.html     | 1292 +++++
 content/docs/latest/client/advanced/kerberos.html  | 1675 +++++++
 content/docs/latest/client/advanced/logging.html   | 1277 +++++
 content/docs/latest/client/bi_tools/datagrip.html  | 1488 ++++++
 content/docs/latest/client/bi_tools/dbeaver.html   | 1510 ++++++
 content/docs/latest/client/bi_tools/hue.html       | 1498 ++++++
 content/docs/latest/client/bi_tools/index.html     | 1293 +++++
 content/docs/latest/client/bi_tools/powerbi.html   | 1281 +++++
 content/docs/latest/client/bi_tools/superset.html  | 1281 +++++
 content/docs/latest/client/bi_tools/tableau.html   | 1281 +++++
 content/docs/latest/client/cli/hive_beeline.html   | 1332 ++++++
 content/docs/latest/client/cli/index.html          | 1286 +++++
 content/docs/latest/client/cli/kyuubi_beeline.html | 1281 +++++
 content/docs/latest/client/hive_jdbc.html          |  366 --
 content/docs/latest/client/index.html              | 1443 +++++-
 content/docs/latest/client/jdbc/hive_jdbc.html     | 1456 ++++++
 content/docs/latest/client/jdbc/index.html         | 1284 +++++
 content/docs/latest/client/jdbc/kyuubi_jdbc.html   | 1573 +++++++
 content/docs/latest/client/jdbc/mysql_jdbc.html    | 1285 +++++
 content/docs/latest/client/kerberized_kyuubi.html  |  538 ---
 content/docs/latest/client/odbc/index.html         | 1279 +++++
 content/docs/latest/client/python/index.html       | 1282 +++++
 content/docs/latest/client/python/pyhive.html      | 1281 +++++
 content/docs/latest/client/rest/index.html         | 1285 +++++
 content/docs/latest/client/rest/rest_api.html      | 1813 +++++++
 content/docs/latest/client/thrift/index.html       | 1279 +++++
 content/docs/latest/client/ui/index.html           | 1279 +++++
 content/docs/latest/community/CONTRIBUTING.html    | 1481 +++++-
 content/docs/latest/community/collaborators.html   | 1384 +++++-
 content/docs/latest/community/index.html           | 1414 +++++-
 content/docs/latest/community/release.html         | 1740 ++++++-
 .../latest/connector/flink/flink_table_store.html  | 1413 ++++++
 content/docs/latest/connector/flink/hudi.html      | 1420 ++++++
 content/docs/latest/connector/flink/iceberg.html   | 1421 ++++++
 content/docs/latest/connector/flink/index.html     | 1296 +++++
 content/docs/latest/connector/hive/index.html      | 1279 +++++
 content/docs/latest/connector/index.html           | 1325 ++++++
 .../docs/latest/connector/spark/delta_lake.html    | 1412 ++++++
 .../spark/delta_lake_with_azure_blob.html          | 1796 +++++++
 .../latest/connector/spark/flink_table_store.html  | 1404 ++++++
 content/docs/latest/connector/spark/hudi.html      | 1412 ++++++
 content/docs/latest/connector/spark/iceberg.html   | 1432 ++++++
 content/docs/latest/connector/spark/index.html     | 1339 ++++++
 .../{integrations => connector/spark}/kudu.html    | 1619 ++++++-
 content/docs/latest/connector/spark/tidb.html      | 1419 ++++++
 content/docs/latest/connector/spark/tpcds.html     | 1421 ++++++
 content/docs/latest/connector/spark/tpch.html      | 1417 ++++++
 .../latest/connector/trino/flink_table_store.html  | 1406 ++++++
 content/docs/latest/connector/trino/iceberg.html   | 1395 ++++++
 content/docs/latest/connector/trino/index.html     | 1291 +++++
 .../docs/latest/deployment/engine_lifecycle.html   | 1502 +++++-
 .../latest/deployment/engine_on_kubernetes.html    | 1565 ++++++-
 content/docs/latest/deployment/engine_on_yarn.html | 1893 +++++++-
 .../docs/latest/deployment/engine_share_level.html | 1583 ++++++-
 .../latest/deployment/high_availability_guide.html | 1550 +++++-
 content/docs/latest/deployment/hive_metastore.html | 1619 ++++++-
 content/docs/latest/deployment/index.html          | 1563 ++++++-
 .../latest/deployment/kyuubi_on_kubernetes.html    | 1542 +++++-
 content/docs/latest/deployment/settings.html       | 4343 ++++++++++++-----
 content/docs/latest/deployment/spark/aqe.html      | 1832 +++++++-
 content/docs/latest/deployment/spark/basics.html   |  271 --
 content/docs/latest/deployment/spark/driver.html   |  268 --
 .../deployment/spark/dynamic_allocation.html       | 1604 ++++++-
 .../deployment/spark/dynamicpartitionpruning.html  |  265 --
 content/docs/latest/deployment/spark/ess.html      |  262 --
 .../docs/latest/deployment/spark/eventqueue.html   |  267 --
 content/docs/latest/deployment/spark/executor.html |  264 --
 .../docs/latest/deployment/spark/heartbeart.html   |  264 --
 .../deployment/spark/incremental_collection.html   | 1468 +++++-
 content/docs/latest/deployment/spark/index.html    | 1425 +++++-
 content/docs/latest/deployment/spark/locality.html |  263 --
 .../docs/latest/deployment/spark/monitering.html   |  265 --
 content/docs/latest/deployment/spark/shuffle.html  |  285 --
 .../docs/latest/deployment/spark/speculation.html  |  264 --
 content/docs/latest/deployment/spark/sql.html      |  269 --
 .../docs/latest/develop_tools/build_document.html  | 1503 +++++-
 content/docs/latest/develop_tools/building.html    | 1530 +++++-
 content/docs/latest/develop_tools/debugging.html   | 1551 +++++-
 content/docs/latest/develop_tools/developer.html   | 1499 +++++-
 .../docs/latest/develop_tools/distribution.html    | 1409 +++++-
 content/docs/latest/develop_tools/idea_setup.html  | 1448 ++++++
 content/docs/latest/develop_tools/index.html       | 1444 +++++-
 content/docs/latest/develop_tools/testing.html     | 1475 +++++-
 .../latest/extensions/engines/flink/index.html     | 1286 +++++
 .../docs/latest/extensions/engines/hive/index.html | 1288 +++++
 content/docs/latest/extensions/engines/index.html  | 1304 ++++++
 .../latest/extensions/engines/spark/functions.html | 1334 ++++++
 .../latest/extensions/engines/spark/index.html     | 1286 +++++
 .../latest/extensions/engines/spark/rules.html     | 1500 ++++++
 .../engines/spark/z-order-benchmark.html           | 1633 +++++++
 .../latest/extensions/engines/spark/z-order.html   | 1584 +++++++
 .../latest/extensions/engines/trino/index.html     | 1286 +++++
 content/docs/latest/extensions/index.html          | 1306 ++++++
 .../latest/extensions/server/applications.html     | 1463 ++++++
 .../latest/extensions/server/authentication.html   | 1379 ++++++
 .../latest/extensions/server/configuration.html    | 1386 ++++++
 content/docs/latest/extensions/server/events.html  | 1285 +++++
 content/docs/latest/extensions/server/index.html   | 1288 +++++
 content/docs/latest/genindex.html                  | 1292 ++++-
 content/docs/latest/index.html                     | 1599 ++++++-
 content/docs/latest/integrations/delta_lake.html   |  296 --
 .../integrations/delta_lake_with_azure_blob.html   |  648 ---
 content/docs/latest/integrations/index.html        |  282 --
 content/docs/latest/monitor/events.html            | 1362 +++++-
 content/docs/latest/monitor/index.html             | 1381 +++++-
 content/docs/latest/monitor/logging.html           | 1697 ++++++-
 content/docs/latest/monitor/metrics.html           | 1485 +++++-
 content/docs/latest/monitor/trouble_shooting.html  | 1567 ++++++-
 content/docs/latest/objects.inv                    |  Bin 1837 -> 3144 bytes
 content/docs/latest/overview/architecture.html     | 1529 +++++-
 content/docs/latest/overview/index.html            | 1381 +++++-
 content/docs/latest/overview/kyuubi_vs_hive.html   | 1494 +++++-
 .../latest/overview/kyuubi_vs_thriftserver.html    | 1747 ++++++-
 content/docs/latest/quick_start/index.html         | 1439 +++++-
 content/docs/latest/quick_start/quick_start.html   | 1860 ++++++--
 .../quick_start/quick_start_with_beeline.html      |  353 --
 .../quick_start/quick_start_with_datagrip.html     |  344 --
 .../quick_start/quick_start_with_dbeaver.html      |  484 --
 .../latest/quick_start/quick_start_with_helm.html  | 1569 ++++++-
 .../latest/quick_start/quick_start_with_hue.html   |  391 --
 .../latest/quick_start/quick_start_with_jdbc.html  | 1524 +++++-
 .../quick_start/quick_start_with_jupyter.html      | 1362 +++++-
 content/docs/latest/requirements.html              | 1310 ++++++
 content/docs/latest/search.html                    | 1328 +++++-
 content/docs/latest/searchindex.js                 |    2 +-
 content/docs/latest/security/authentication.html   | 1557 ++++--
 content/docs/latest/security/authorization.html    |  295 --
 .../docs/latest/security/authorization/index.html  | 1287 +++++
 .../latest/security/authorization/spark/build.html | 1507 ++++++
 .../latest/security/authorization/spark/index.html | 1300 ++++++
 .../security/authorization/spark/install.html      | 1538 ++++++
 .../security/authorization/spark/overview.html     | 1384 ++++++
 .../security/hadoop_credentials_manager.html       | 1571 ++++++-
 content/docs/latest/security/index.html            | 1412 +++++-
 content/docs/latest/security/jdbc.html             | 1368 ++++++
 content/docs/latest/security/kerberos.html         | 1456 ++++++
 content/docs/latest/security/kinit.html            | 1481 +++++-
 content/docs/latest/security/ldap.html             | 1281 +++++
 content/docs/latest/sql/functions.html             |  316 --
 content/docs/latest/sql/index.html                 |  277 --
 content/docs/latest/sql/rules.html                 |  417 --
 content/docs/latest/sql/z-order-benchmark.html     |  577 ---
 content/docs/latest/sql/z-order-introduction.html  |  456 --
 content/docs/latest/tools/index.html               | 1404 +++++-
 content/docs/latest/tools/kyuubi-admin.html        | 1377 ++++++
 content/docs/latest/tools/kyuubi-ctl.html          | 1576 +++++++
 content/docs/latest/tools/spark_block_cleaner.html | 1548 +++++-
 454 files changed, 205966 insertions(+), 27970 deletions(-)

diff --git a/content/docs/latest/.buildinfo b/content/docs/latest/.buildinfo
deleted file mode 100644
index d02a3cf..0000000
--- a/content/docs/latest/.buildinfo
+++ /dev/null
@@ -1,4 +0,0 @@
-# Sphinx build info version 1
-# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 54979cdc96d7fec1e4892cfc1eb9bf2a
-tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/content/docs/latest/404.html b/content/docs/latest/404.html
index 30c6da7..77ee4c8 100644
--- a/content/docs/latest/404.html
+++ b/content/docs/latest/404.html
@@ -1,193 +1,1153 @@
 
-
 <!DOCTYPE html>
-<html class="writer-html5" lang="en" >
-<head>
-  <meta charset="utf-8" />
-  
-  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-  
-  <title>Page not found &mdash; Kyuubi 1.5.1-incubating documentation</title>
-  
-
-  
-  <link rel="stylesheet" href="/en/latest/_static/css/custom.css" type="text/css" />
-  <link rel="stylesheet" href="/en/latest/_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="/en/latest/_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="/en/latest/_static/css/custom.css" type="text/css" />
-
-  
-  
-
-  
-  
 
-  
-
-  
-  <!--[if lt IE 9]>
-    <script src="/en/latest/_static/js/html5shiv.min.js"></script>
-  <![endif]-->
-  
+<html>
+  <head>
+    <meta charset="utf-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <title>Page not found &#8212; Apache Kyuubi</title>
     
-      <script type="text/javascript" id="documentation_options" data-url_root="" src="/en/latest/_static/documentation_options.js"></script>
-        <script data-url_root="#" id="documentation_options" src="/en/latest/_static/documentation_options.js"></script>
-        <script src="/en/latest/_static/jquery.js"></script>
-        <script src="/en/latest/_static/underscore.js"></script>
-        <script src="/en/latest/_static/doctools.js"></script>
-    
-    <script type="text/javascript" src="/en/latest/_static/js/theme.js"></script>
+  <!-- Loaded before other Sphinx assets -->
+  <link href="/en/latest/_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
+<link href="/en/latest/_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
 
     
-    <link rel="index" title="Index" href="/en/latest/genindex.html" />
-    <link rel="search" title="Search" href="/en/latest/search.html" /> 
-</head>
+  <link rel="stylesheet"
+    href="/en/latest/_static/vendor/fontawesome/5.13.0/css/all.min.css">
+  <link rel="preload" as="font" type="font/woff2" crossorigin
+    href="/en/latest/_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
+  <link rel="preload" as="font" type="font/woff2" crossorigin
+    href="/en/latest/_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
 
-<body class="wy-body-for-nav">
-
-   
-  <div class="wy-grid-for-nav">
+    <link rel="stylesheet" type="text/css" href="/en/latest/_static/pygments.css" />
+    <link rel="stylesheet" href="/en/latest/_static/styles/sphinx-book-theme.css?digest=62ba249389abaaa9ffc34bf36a076bdc1d65ee18" type="text/css" />
+    <link rel="stylesheet" type="text/css" href="/en/latest/_static/css/custom.css" />
     
-    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
-      <div class="wy-side-scroll">
-        <div class="wy-side-nav-search" >
-          
+  <!-- Pre-loaded scripts that we'll load fully later -->
+  <link rel="preload" as="script" href="/en/latest/_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
 
-          
-            <a href="/en/latest/index.html" class="icon icon-home"> Kyuubi
-          
+    <script data-url_root="#" id="documentation_options" src="/en/latest/_static/documentation_options.js"></script>
+    <script src="/en/latest/_static/jquery.js"></script>
+    <script src="/en/latest/_static/underscore.js"></script>
+    <script src="/en/latest/_static/doctools.js"></script>
+    <script src="/en/latest/_static/scripts/sphinx-book-theme.js?digest=f31d14ad54b65d19161ba51d4ffff3a77ae00456"></script>
+    <link rel="shortcut icon" href="/en/latest/_static/kyuubi_logo_red.png"/>
+    <link rel="index" title="Index" href="/en/latest/genindex.html" />
+    <link rel="search" title="Search" href="/en/latest/search.html" />
+    <meta name="viewport" content="width=device-width, initial-scale=1" />
+    <meta name="docsearch:language" content="None">
+    
 
-          
-            
-            <img src="/en/latest/_static/kyuubi_logo_gray.png" class="logo" alt="Logo"/>
-          
-          </a>
+    <!-- Google Analytics -->
+    
+  </head>
+  <body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
+<!-- Checkboxes to toggle the left sidebar -->
+<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
+<label class="overlay overlay-navbar" for="__navigation">
+    <div class="visually-hidden">Toggle navigation sidebar</div>
+</label>
+<!-- Checkboxes to toggle the in-page toc -->
+<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
+<label class="overlay overlay-pagetoc" for="__page-toc">
+    <div class="visually-hidden">Toggle in-page Table of Contents</div>
+</label>
+<!-- Headers at the top -->
+<div class="announcement header-item noprint">&#129418; Welcome to Kyuubi’s online documentation &#x2728;, v1.6.0-incubating</div>
+<div class="header header-item noprint"></div>
 
-          
-            
-            
-          
+    
+    <div class="container-fluid" id="banner"></div>
 
-          
-<div role="search">
-  <form id="rtd-search-form" class="wy-form" action="/en/latest/search.html" method="get">
-    <input type="text" name="q" placeholder="Search docs" />
-    <input type="hidden" name="check_keywords" value="yes" />
-    <input type="hidden" name="area" value="default" />
-  </form>
-</div>
+    
 
+    <div class="container-xl">
+      <div class="row">
           
-        </div>
-
+<!-- Sidebar -->
+<div class="bd-sidebar noprint" id="site-navigation">
+    <div class="bd-sidebar__content">
+        <div class="bd-sidebar__top"><div class="navbar-brand-box">
+    <a class="navbar-brand text-wrap" href="/en/latest/index.html">
+      
+        <!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
         
-        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
-          
-            
-            
-              
-            
-            
-              <p class="caption" role="heading"><span class="caption-text">Usage Guide</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/quick_start/index.html">Quick Start</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/deployment/index.html">Deploying Kyuubi</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/security/index.html">Security</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/client/index.html">Client Documentation</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/integrations/index.html">Integrations</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/monitor/index.html">Monitoring</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/sql/index.html">SQL References</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/tools/index.html">Tools</a></li>
+      
+      
+      <img src="/en/latest/_static/kyuubi_logo.png" class="logo" alt="logo">
+      
+      
+    </a>
+</div><form class="bd-search d-flex align-items-center" action="/en/latest/search.html" method="get">
+  <i class="icon fas fa-search"></i>
+  <input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off" >
+</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
+    <div class="bd-toc-item active">
+        
+        <ul class="nav bd-sidenav bd-sidenav__home-link">
+            <li class="toctree-l1">
+                <a class="reference internal" href="/en/latest/index.html">
+                    HOME
+                </a>
+            </li>
+        </ul>
+        <p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Admin Guide
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/quick_start/index.html">
+   Quick Start
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"/>
+  <label for="toctree-checkbox-1">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/quick_start/quick_start.html">
+     Getting Started with Apache Kyuubi
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/quick_start/quick_start_with_helm.html">
+     Getting Started With Kyuubi on kubernetes
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/quick_start/quick_start_with_jdbc.html">
+     Getting Started With Hive JDBC
+    </a>
+   </li>
+  </ul>
+ </li>
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/deployment/index.html">
+   Deploying Kyuubi
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox"/>
+  <label for="toctree-checkbox-2">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/kyuubi_on_kubernetes.html">
+     Deploy Kyuubi On Kubernetes
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/hive_metastore.html">
+     Integration with Hive Metastore
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/high_availability_guide.html">
+     Kyuubi High Availability Guide
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/settings.html">
+     Introduction to the Kyuubi Configurations System
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/engine_on_yarn.html">
+     Deploy Kyuubi engines on Yarn
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/engine_on_kubernetes.html">
+     Deploy Kyuubi engines on Kubernetes
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/engine_share_level.html">
+     The Share Level Of Kyuubi Engines
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/deployment/engine_lifecycle.html">
+     The TTL Of Kyuubi Engines
+    </a>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/deployment/spark/index.html">
+     The Spark SQL Engine Configuration Guide
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox"/>
+    <label for="toctree-checkbox-3">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/deployment/spark/dynamic_allocation.html">
+       How To Use Spark Dynamic Resource Allocation (DRA) in Kyuubi
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/deployment/spark/aqe.html">
+       How To Use Spark Adaptive Query Execution (AQE) in Kyuubi
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/deployment/spark/incremental_collection.html">
+       Solution for Big Result Sets
+      </a>
+     </li>
+    </ul>
+   </li>
+  </ul>
+ </li>
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/security/index.html">
+   Security
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox"/>
+  <label for="toctree-checkbox-4">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/security/authentication.html">
+     Authentication
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox"/>
+    <label for="toctree-checkbox-5">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/security/kerberos.html">
+       Configure Kyuubi to use Kerberos Authentication
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/advanced/kerberos.html">
+       Configure Kerberos for clients to Access Kerberized Kyuubi
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/security/ldap.html">
+       Configure Kyuubi to use LDAP Authentication
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/security/jdbc.html">
+       Configure Kyuubi to Use JDBC Authentication
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/extensions/server/authentication.html">
+       Configure Kyuubi to use Custom Authentication
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/security/authorization/index.html">
+     Authorization
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox"/>
+    <label for="toctree-checkbox-6">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/security/authorization/spark/index.html">
+       Spark AuthZ Plugin
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox"/>
+      <label for="toctree-checkbox-7">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/security/authorization/spark/overview.html">
+         Overview
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/security/authorization/spark/build.html">
+         Building
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/security/authorization/spark/install.html">
+         Installing
+        </a>
+       </li>
+      </ul>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/security/kinit.html">
+     Kinit Auxiliary Service
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/security/hadoop_credentials_manager.html">
+     Hadoop Credentials Manager
+    </a>
+   </li>
+  </ul>
+ </li>
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/monitor/index.html">
+   Monitoring
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox"/>
+  <label for="toctree-checkbox-8">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/monitor/logging.html">
+     1. Monitoring Kyuubi - Logging System
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/monitor/metrics.html">
+     2. Monitoring Kyuubi - Server Metrics
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/monitor/trouble_shooting.html">
+     3. Trouble Shooting
+    </a>
+   </li>
+  </ul>
+ </li>
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/tools/index.html">
+   Tools
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox"/>
+  <label for="toctree-checkbox-9">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/tools/spark_block_cleaner.html">
+     Kubernetes Tools Spark Block Cleaner
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/tools/kyuubi-ctl.html">
+     Managing kyuubi servers and engines Tool
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/tools/kyuubi-admin.html">
+     Kyuubi Administer Tool
+    </a>
+   </li>
+  </ul>
+ </li>
 </ul>
-<p class="caption" role="heading"><span class="caption-text">Kyuubi Insider</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/overview/index.html">Overview</a></li>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  User Guide
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/client/index.html">
+   Clients &amp; APIs
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox"/>
+  <label for="toctree-checkbox-10">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/jdbc/index.html">
+     JDBC Drivers
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox"/>
+    <label for="toctree-checkbox-11">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/jdbc/kyuubi_jdbc.html">
+       Kyuubi Hive JDBC Driver
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/jdbc/hive_jdbc.html">
+       Hive JDBC Driver
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/jdbc/mysql_jdbc.html">
+       MySQL Connectors
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/cli/index.html">
+     Command Line Interface(CLI)s
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox"/>
+    <label for="toctree-checkbox-12">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/cli/kyuubi_beeline.html">
+       Kyuubi Beeline
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/cli/hive_beeline.html">
+       Hive Beeline
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/bi_tools/index.html">
+     Business Intelligence Tools and SQL IDEs
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox"/>
+    <label for="toctree-checkbox-13">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/superset.html">
+       Apache Superset
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/hue.html">
+       Cloudera Hue
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/datagrip.html">
+       DataGrip
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/dbeaver.html">
+       DBeaver
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/powerbi.html">
+       PowerBI
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/bi_tools/tableau.html">
+       Tableau
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/odbc/index.html">
+     ODBC Drivers
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"/>
+    <label for="toctree-checkbox-14">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul class="simple">
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/thrift/index.html">
+     Thrift APIs
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"/>
+    <label for="toctree-checkbox-15">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul class="simple">
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/rest/index.html">
+     RESTful APIs and Clients
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"/>
+    <label for="toctree-checkbox-16">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/rest/rest_api.html">
+       REST API v1
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/ui/index.html">
+     Web UI
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"/>
+    <label for="toctree-checkbox-17">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul class="simple">
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/python/index.html">
+     Python DB-APIs
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"/>
+    <label for="toctree-checkbox-18">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/python/pyhive.html">
+       PyHive
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/client/advanced/index.html">
+     Client Commons
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"/>
+    <label for="toctree-checkbox-19">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/advanced/configurations.html">
+       Client Configuration Guide
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/client/advanced/logging.html">
+       Logging
+      </a>
+     </li>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/client/advanced/features/index.html">
+       Advanced Features
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"/>
+      <label for="toctree-checkbox-20">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/engine_type.html">
+         Using Different Kyuubi Engines
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/engine_share_level.html">
+         Sharing and Isolation for Kyuubi Engines
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/engine_ttl.html">
+         Setting Time to Live for Kyuubi Engines
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/engine_pool.html">
+         Enabling Kyuubi Engine Pool
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/scala.html">
+         Running Scala Snippets
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/client/advanced/features/plan_only.html">
+         Plan Only Execution Mode
+        </a>
+       </li>
+      </ul>
+     </li>
+    </ul>
+   </li>
+  </ul>
+ </li>
 </ul>
-<p class="caption" role="heading"><span class="caption-text">Contributing</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/develop_tools/index.html">Develop Tools</a></li>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/community/index.html">Community</a></li>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Extension Guide
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/extensions/index.html">
+   Extensions
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"/>
+  <label for="toctree-checkbox-21">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/extensions/server/index.html">
+     Server Side Extensions
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"/>
+    <label for="toctree-checkbox-22">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/extensions/server/authentication.html">
+       Configure Kyuubi to use Custom Authentication
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/extensions/server/configuration.html">
+       Inject Session Conf with Custom Config Advisor
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/extensions/server/events.html">
+       Handle Events with Custom Event Handler
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/extensions/server/applications.html">
+       Manage Applications against Extra Cluster Managers
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/extensions/engines/index.html">
+     Engine Side Extensions
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"/>
+    <label for="toctree-checkbox-23">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/extensions/engines/spark/index.html">
+       Extensions for Spark
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"/>
+      <label for="toctree-checkbox-24">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/extensions/engines/spark/z-order.html">
+         Z-Ordering Support
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/extensions/engines/spark/rules.html">
+         Auxiliary Optimization Rules
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/security/authorization/spark/index.html">
+         Kyuubi Spark AuthZ Plugin
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/extensions/engines/spark/functions.html">
+         Auxiliary SQL Functions
+        </a>
+       </li>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/connector/spark/index.html">
+         Connectors for Spark SQL Query Engine
+        </a>
+       </li>
+      </ul>
+     </li>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/extensions/engines/flink/index.html">
+       Extensions for Flink
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"/>
+      <label for="toctree-checkbox-25">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/connector/flink/index.html">
+         Connectors For Flink SQL Query Engine
+        </a>
+       </li>
+      </ul>
+     </li>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/extensions/engines/hive/index.html">
+       Extensions for Hive
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"/>
+      <label for="toctree-checkbox-26">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/connector/hive/index.html">
+         Connectors for Hive SQL Query Engine
+        </a>
+       </li>
+      </ul>
+     </li>
+     <li class="toctree-l3 has-children">
+      <a class="reference internal" href="/en/latest/extensions/engines/trino/index.html">
+       Extensions for Trino
+      </a>
+      <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"/>
+      <label for="toctree-checkbox-27">
+       <i class="fas fa-chevron-down">
+       </i>
+      </label>
+      <ul>
+       <li class="toctree-l4">
+        <a class="reference internal" href="/en/latest/connector/trino/index.html">
+         Connectors For Trino SQL Engine
+        </a>
+       </li>
+      </ul>
+     </li>
+    </ul>
+   </li>
+  </ul>
+ </li>
 </ul>
-<p class="caption" role="heading"><span class="caption-text">Appendix</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="/en/latest/appendix/index.html">Appendixes</a></li>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Connectors
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/connector/index.html">
+   Connectors
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"/>
+  <label for="toctree-checkbox-28">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/connector/spark/index.html">
+     Connectors for Spark SQL Query Engine
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"/>
+    <label for="toctree-checkbox-29">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/delta_lake.html">
+       Delta Lake
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/delta_lake_with_azure_blob.html">
+       Delta Lake with Microsoft Azure Blob Storage
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/hudi.html">
+       Hudi
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/iceberg.html">
+       Iceberg
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/kudu.html">
+       Kudu
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/flink_table_store.html">
+       Flink Table Store
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/tidb.html">
+       TiDB
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/tpcds.html">
+       TPC-DS
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/spark/tpch.html">
+       TPC-H
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/connector/flink/index.html">
+     Connectors For Flink SQL Query Engine
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"/>
+    <label for="toctree-checkbox-30">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/flink/flink_table_store.html">
+       Flink Table Store
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/flink/hudi.html">
+       Hudi
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/flink/iceberg.html">
+       Iceberg
+      </a>
+     </li>
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/connector/hive/index.html">
+     Connectors for Hive SQL Query Engine
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"/>
+    <label for="toctree-checkbox-31">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul class="simple">
+    </ul>
+   </li>
+   <li class="toctree-l2 has-children">
+    <a class="reference internal" href="/en/latest/connector/trino/index.html">
+     Connectors For Trino SQL Engine
+    </a>
+    <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"/>
+    <label for="toctree-checkbox-32">
+     <i class="fas fa-chevron-down">
+     </i>
+    </label>
+    <ul>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/trino/flink_table_store.html">
+       Flink Table Store
+      </a>
+     </li>
+     <li class="toctree-l3">
+      <a class="reference internal" href="/en/latest/connector/trino/iceberg.html">
+       Iceberg
+      </a>
+     </li>
+    </ul>
+   </li>
+  </ul>
+ </li>
+</ul>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Kyuubi Insider
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/overview/index.html">
+   Overview
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"/>
+  <label for="toctree-checkbox-33">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/overview/architecture.html">
+     Architecture
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/overview/kyuubi_vs_hive.html">
+     Kyuubi v.s. HiveServer2
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/overview/kyuubi_vs_thriftserver.html">
+     Kyuubi v.s. Spark Thrift JDBC/ODBC Server (STS)
+    </a>
+   </li>
+  </ul>
+ </li>
+</ul>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Contributing
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/develop_tools/index.html">
+   Develop Tools
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"/>
+  <label for="toctree-checkbox-34">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/building.html">
+     Building Kyuubi
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/distribution.html">
+     Building a Runnable Distribution
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/build_document.html">
+     Building Kyuubi Documentation
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/testing.html">
+     Running Tests
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/debugging.html">
+     Debugging Kyuubi
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/developer.html">
+     Developer Tools
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/develop_tools/idea_setup.html">
+     IntelliJ IDEA Setup Guide
+    </a>
+   </li>
+  </ul>
+ </li>
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/community/index.html">
+   Community
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"/>
+  <label for="toctree-checkbox-35">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/community/CONTRIBUTING.html">
+     Contributing to Apache Kyuubi
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/community/collaborators.html">
+     Collaborators
+    </a>
+   </li>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/community/release.html">
+     Kyuubi Release Guide
+    </a>
+   </li>
+  </ul>
+ </li>
+</ul>
+<p aria-level="2" class="caption" role="heading">
+ <span class="caption-text">
+  Appendix
+ </span>
+</p>
+<ul class="nav bd-sidenav">
+ <li class="toctree-l1 has-children">
+  <a class="reference internal" href="/en/latest/appendix/index.html">
+   Appendixes
+  </a>
+  <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"/>
+  <label for="toctree-checkbox-36">
+   <i class="fas fa-chevron-down">
+   </i>
+  </label>
+  <ul>
+   <li class="toctree-l2">
+    <a class="reference internal" href="/en/latest/appendix/terminology.html">
+     1. Terminologies
+    </a>
+   </li>
+  </ul>
+ </li>
 </ul>
 
+    </div>
+</nav></div>
+        <div class="bd-sidebar__bottom">
+             <!-- To handle the deprecated key -->
+            
+            <div class="navbar_extra_footer">
+            Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
+            </div>
             
-          
         </div>
-        
-      </div>
-    </nav>
-
-    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
-
-      
-      <nav class="wy-nav-top" aria-label="top navigation">
-        
-          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
-          <a href="/en/latest/index.html">Kyuubi</a>
-        
-      </nav>
+    </div>
+    <div id="rtd-footer-container"></div>
+</div>
 
 
-      <div class="wy-nav-content">
-        
-        <div class="rst-content">
-        
           
 
 
+          
+<!-- A tiny helper pixel to detect if we've scrolled -->
+<div class="sbt-scroll-pixel-helper"></div>
+<!-- Main content -->
+<div class="col py-0 content-container">
+    
+    <div class="header-article row sticky-top noprint">
+        
 
 
 
+<div class="col py-1 d-flex header-article-main">
+    <div class="header-article__left">
+        
+        <label for="__navigation"
+  class="headerbtn"
+  data-toggle="tooltip"
+data-placement="right"
+title="Toggle navigation"
+>
+  
 
+<span class="headerbtn__icon-container">
+  <i class="fas fa-bars"></i>
+  </span>
 
+</label>
 
+        
+    </div>
+    <div class="header-article__right">
+<button onclick="toggleFullScreen()"
+  class="headerbtn"
+  data-toggle="tooltip"
+data-placement="bottom"
+title="Fullscreen mode"
+>
+  
 
+<span class="headerbtn__icon-container">
+  <i class="fas fa-expand"></i>
+  </span>
 
+</button>
+<a href="https://github.com/apache/incubator-kyuubi"
+   class="headerbtn"
+   data-toggle="tooltip"
+data-placement="bottom"
+title="Source repository"
+>
+  
 
+<span class="headerbtn__icon-container">
+  <i class="fab fa-github"></i>
+  </span>
 
+</a>
 
+    </div>
+</div>
 
-
-
-
-<div role="navigation" aria-label="breadcrumbs navigation">
-
-  <ul class="wy-breadcrumbs">
-    
-      <li><a href="/en/latest/index.html" class="icon icon-home"></a> &raquo;</li>
-        
-      <li>Page not found</li>
-    
-    
-      <li class="wy-breadcrumbs-aside">
-        
-      </li>
-    
-  </ul>
-
-  
-  <hr/>
+<!-- Table of contents -->
+<div class="col-md-3 bd-toc show noprint">
 </div>
-          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
-           <div itemprop="articleBody">
-            
+    </div>
+    <div class="article row">
+        <div class="col pl-md-3 pl-lg-5 content-container">
+            <!-- Table of contents that is only displayed when printing the page -->
+            <div id="jb-print-docs-body" class="onlyprint">
+                <h1></h1>
+                <!-- Table of contents -->
+                <div id="print-main-content">
+                    <div id="jb-print-toc">
+                        
+                    </div>
+                </div>
+            </div>
+            <main id="main-content" role="main">
+                
+              <div>
+                
   <h1>Page not found</h1>
 
-Thanks for trying.
-
-           </div>
-           
-          </div>
-          <footer>
+Unfortunately we couldn't find the content you were looking for.
 
-  <hr/>
-
-  <div role="contentinfo">
-    <p>
-        &#169; Copyright 
+              </div>
+              
+            </main>
+            <footer class="footer-article noprint">
+                
+    <!-- Previous / next buttons -->
+<div class='prev-next-area'>
+</div>
+            </footer>
+        </div>
+    </div>
+    <div class="footer-content row">
+        <footer class="col footer"><p>
+  
+    By Kent Yao<br/>
+  
+      &copy; Copyright 
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
@@ -202,38 +1162,20 @@ distributed under the License is distributed on an &#34;AS IS&#34; BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-.
-
-    </p>
-  </div>
-    
-    
+.<br/>
+</p>
+        </footer>
+    </div>
     
-    Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
-    
-    <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
-    
-    provided by <a href="https://readthedocs.org">Read the Docs</a>. 
-
-</footer>
-        </div>
-      </div>
+</div>
 
-    </section>
 
-  </div>
+      </div>
+    </div>
   
+  <!-- Scripts loaded after <body> so the DOM is not blocked -->
+  <script src="/en/latest/_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
 
-  <script type="text/javascript">
-      jQuery(function () {
-          SphinxRtdTheme.Navigation.enable(true);
-      });
-  </script>
-
-  
-  
-    
-   
 
-</body>
+  </body>
 </html>
\ No newline at end of file
diff --git a/content/docs/latest/_images/configure_database_connection.png b/content/docs/latest/_images/configure_database_connection.png
new file mode 100644
index 0000000..f694014
Binary files /dev/null and b/content/docs/latest/_images/configure_database_connection.png differ
diff --git a/content/docs/latest/_images/configure_database_connection_ha.png b/content/docs/latest/_images/configure_database_connection_ha.png
new file mode 100644
index 0000000..0caae3c
Binary files /dev/null and b/content/docs/latest/_images/configure_database_connection_ha.png differ
diff --git a/content/docs/latest/_images/connected.png b/content/docs/latest/_images/connected.png
deleted file mode 100644
index 372cf25..0000000
Binary files a/content/docs/latest/_images/connected.png and /dev/null differ
diff --git a/content/docs/latest/_images/dbeaver_connnect_to_database.png b/content/docs/latest/_images/dbeaver_connnect_to_database.png
deleted file mode 100644
index 9b8a2e4..0000000
Binary files a/content/docs/latest/_images/dbeaver_connnect_to_database.png and /dev/null differ
diff --git a/content/docs/latest/_images/dbeaver_connnect_to_database_connection.png b/content/docs/latest/_images/dbeaver_connnect_to_database_connection.png
deleted file mode 100644
index 2442f4e..0000000
Binary files a/content/docs/latest/_images/dbeaver_connnect_to_database_connection.png and /dev/null differ
diff --git a/content/docs/latest/_images/dbeaver_connnect_to_database_driver.png b/content/docs/latest/_images/dbeaver_connnect_to_database_driver.png
deleted file mode 100644
index 35442b8..0000000
Binary files a/content/docs/latest/_images/dbeaver_connnect_to_database_driver.png and /dev/null differ
diff --git a/content/docs/latest/_images/dbeaver_connnect_to_database_port.png b/content/docs/latest/_images/dbeaver_connnect_to_database_port.png
deleted file mode 100644
index 15669f7..0000000
Binary files a/content/docs/latest/_images/dbeaver_connnect_to_database_port.png and /dev/null differ
diff --git a/content/docs/latest/_images/delta_lake_functions.png b/content/docs/latest/_images/delta_lake_functions.png
deleted file mode 100644
index 015e5b6..0000000
Binary files a/content/docs/latest/_images/delta_lake_functions.png and /dev/null differ
diff --git a/content/docs/latest/_images/desc_database.png b/content/docs/latest/_images/desc_database.png
deleted file mode 100644
index f4a18d0..0000000
Binary files a/content/docs/latest/_images/desc_database.png and /dev/null differ
diff --git a/content/docs/latest/_images/download_driver.png b/content/docs/latest/_images/download_driver.png
deleted file mode 100644
index d936da8..0000000
Binary files a/content/docs/latest/_images/download_driver.png and /dev/null differ
diff --git a/content/docs/latest/_images/kyuubi.png b/content/docs/latest/_images/kyuubi.png
deleted file mode 100644
index de17c46..0000000
Binary files a/content/docs/latest/_images/kyuubi.png and /dev/null differ
diff --git a/content/docs/latest/_images/metadata.png b/content/docs/latest/_images/metadata.png
index 6d422bc..3d35296 100644
Binary files a/content/docs/latest/_images/metadata.png and b/content/docs/latest/_images/metadata.png differ
diff --git a/content/docs/latest/_images/new_database_connection.png b/content/docs/latest/_images/new_database_connection.png
new file mode 100644
index 0000000..7a02469
Binary files /dev/null and b/content/docs/latest/_images/new_database_connection.png differ
diff --git a/content/docs/latest/_images/query.png b/content/docs/latest/_images/query.png
deleted file mode 100644
index f21bd2d..0000000
Binary files a/content/docs/latest/_images/query.png and /dev/null differ
diff --git a/content/docs/latest/_images/query41_result.png b/content/docs/latest/_images/query41_result.png
deleted file mode 100644
index bee75b9..0000000
Binary files a/content/docs/latest/_images/query41_result.png and /dev/null differ
diff --git a/content/docs/latest/_images/tpcds_schema.png b/content/docs/latest/_images/tpcds_schema.png
deleted file mode 100644
index b610c61..0000000
Binary files a/content/docs/latest/_images/tpcds_schema.png and /dev/null differ
diff --git a/content/docs/latest/_images/trino-query-page.png b/content/docs/latest/_images/trino-query-page.png
new file mode 100644
index 0000000..7d78cb1
Binary files /dev/null and b/content/docs/latest/_images/trino-query-page.png differ
diff --git a/content/docs/latest/_images/viewdata.png b/content/docs/latest/_images/viewdata.png
deleted file mode 100644
index 4a0280b..0000000
Binary files a/content/docs/latest/_images/viewdata.png and /dev/null differ
diff --git a/content/docs/latest/_sources/appendix/index.rst.txt b/content/docs/latest/_sources/appendix/index.rst.txt
new file mode 100644
index 0000000..fdb40cf
--- /dev/null
+++ b/content/docs/latest/_sources/appendix/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Appendixes
+==========
+
+.. toctree::
+    :maxdepth: 3
+    :numbered: 4
+
+
+    terminology
diff --git a/content/docs/latest/_sources/appendix/terminology.md.txt b/content/docs/latest/_sources/appendix/terminology.md.txt
new file mode 100644
index 0000000..77d4dea
--- /dev/null
+++ b/content/docs/latest/_sources/appendix/terminology.md.txt
@@ -0,0 +1,164 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Terminologies
+
+## Kyuubi
+
+Kyuubi is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark.
+
+### JDBC
+
+> The Java Database Connectivity (JDBC) API is the industry standard for database-independent connectivity between the Java programming language and a wide range of databases: SQL databases and other tabular data sources,
+> such as spreadsheets or flat files.
+> The JDBC API provides a call-level API for SQL-based database access.
+
+> JDBC technology allows you to use the Java programming language to exploit "Write Once, Run Anywhere" capabilities for applications that require access to enterprise data.
+> With a JDBC technology-enabled driver, you can connect all corporate data even in a heterogeneous environment.
+
+<p align=right>
+<em>
+<a href="https://www.oracle.com/java/technologies/javase/javase-tech-database.html">https://www.oracle.com/java/technologies/javase/javase-tech-database.html</a>
+</em>
+</p>
+
+Typically, there is a gap between business development and big data analytics.
+Forcefully coupling the two makes the corresponding system difficult to operate and optimize;
+decoupling them, on the other hand, lets the value of both be maximized:
+business experts can stay focused on their own business development,
+while big data engineers can continuously optimize server-side performance and stability.
+Kyuubi combines the two seamlessly through an easy-to-use JDBC interface.
+
+#### Apache Hive
+
+> The Apache Hive ™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
+
+<p align=right>
+<em>
+<a href="https://hive.apache.org/">https://hive.apache.org</a>
+</em>
+</p>
+
+Kyuubi supports the Hive JDBC driver, which helps you seamlessly migrate your slow queries from Hive to Spark SQL.
+
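+Below is a minimal connection sketch, assuming a Kyuubi server listening on the default port 10009 of `localhost` and the Hive (or Kyuubi Hive) JDBC driver on the classpath; the host, port, and user are illustrative only.
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class KyuubiJdbcExample {
+    public static void main(String[] args) throws Exception {
+        // Kyuubi speaks the HiveServer2 thrift protocol, so a plain hive2 JDBC URL works.
+        String url = "jdbc:hive2://localhost:10009/default";
+        try (Connection conn = DriverManager.getConnection(url, "bob", "");
+             Statement stmt = conn.createStatement();
+             ResultSet rs = stmt.executeQuery("SELECT 1")) {
+            while (rs.next()) {
+                System.out.println(rs.getInt(1));
+            }
+        }
+    }
+}
+```
+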
+#### Apache Thrift
+
+> The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.
+
+<p align=right>
+<em>
+<a href="https://thrift.apache.org/">https://thrift.apache.org</a>
+</em>
+</p>
+
+### Server
+
+A Server is a daemon process that handles concurrent connection and query requests, converting them into various operations against the **query engines** to complete the responses to clients.
+
+_**Aliases: Kyuubi Server / Kyuubi Instance / k.i.**_
+
+### ServerSpace
+
+A ServerSpace is used to register servers and expose them together as a service layer to clients.
+
+### Engine
+
+An engine handles all the queries that come through Kyuubi servers.
+It is created in one Kyuubi server and can be shared with other Kyuubi servers by registering itself in an engine namespace.
+All its capabilities are mainly powered by Spark SQL.
+
+_**Aliases: Query Engine / Engine Instance / e.i.**_
+
+### EngineSpace
+
+An EngineSpace is internally used by servers to register and interact with engines.
+
+#### Apache Spark
+
+> [Apache Spark™](https://spark.apache.org/) is a unified analytics engine for large-scale data processing.
+
+<p align=right>
+<em>
+<a href="https://spark.apache.org">https://spark.apache.org</a>
+</em>
+</p>
+
+### Multi Tenancy
+
+Kyuubi guarantees end-to-end multi-tenant isolation and sharing in the following pipeline:
+
+```
+Client --> Kyuubi --> Query Engine(Spark) --> Resource Manager --> Data Storage Layer
+```
+
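+As a hedged illustration (the host name and user names below are hypothetical, and the default user-level engine sharing is assumed), two users connecting to the same Kyuubi endpoint are served by separate engines, each submitted to the resource manager as the corresponding user:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+
+public class MultiTenantSketch {
+    public static void main(String[] args) throws Exception {
+        String url = "jdbc:hive2://kyuubi.example.com:10009/default";
+        // Each authenticated user gets its own engine; connections from the
+        // same user may share one, depending on the configured share level.
+        try (Connection alice = DriverManager.getConnection(url, "alice", "");
+             Connection bob = DriverManager.getConnection(url, "bob", "")) {
+            alice.createStatement().execute("SELECT 1");
+            bob.createStatement().execute("SELECT 1");
+        }
+    }
+}
+```
+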
+### High Availability / Load Balance
+
+For an enterprise service, an SLA commitment is essential, and deploying Kyuubi in High Availability (HA) mode helps you guarantee it.
+
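+For instance, with ZooKeeper-based service discovery, clients connect to the ZooKeeper ensemble instead of a single server and are routed to a live Kyuubi instance. A minimal sketch follows; the ZooKeeper hosts are placeholders and the default `kyuubi` namespace is assumed:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+
+public class HaConnectSketch {
+    public static void main(String[] args) throws Exception {
+        // The HiveServer2-compatible discovery mode resolves a live Kyuubi
+        // instance registered under the configured namespace in ZooKeeper.
+        String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default;"
+                + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi";
+        try (Connection conn = DriverManager.getConnection(url, "bob", "")) {
+            System.out.println("Connected: " + conn.getMetaData().getURL());
+        }
+    }
+}
+```
+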
+#### Apache Zookeeper
+
+> Apache ZooKeeper is an effort to develop and maintain an open-source server which enables highly reliable distributed coordination.
+
+<p align=right>
+<em>
+<a href="https://zookeeper.apache.org/">https://zookeeper.apache.org</a>
+</em>
+</p>
+
+#### Apache Curator
+
+> Apache Curator is a Java/JVM client library for Apache ZooKeeper, a distributed coordination service. It includes a high-level API framework and utilities to make using Apache ZooKeeper much easier and more reliable. It also includes recipes for common use cases and extensions such as service discovery and a Java 8 asynchronous DSL.
+
+<p align=right>
+<em>
+<a href="https://curator.apache.org/">https://curator.apache.org</a>
+</em>
+</p>
+
+## DataLake & LakeHouse
+
+Kyuubi unifies DataLake & LakeHouse access in the simplest pure SQL way; meanwhile, it is also the most secure way, with authentication and SQL-standard authorization.
+
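+As a rough sketch of that "pure SQL" access (the catalog and table names below are hypothetical, and the engine is assumed to have the matching lake-format runtime configured):
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+public class LakehouseSqlSketch {
+    public static void main(String[] args) throws Exception {
+        try (Connection conn = DriverManager.getConnection(
+                 "jdbc:hive2://localhost:10009/default", "bob", "");
+             Statement stmt = conn.createStatement()) {
+            // Plain SQL against a lake table; no engine-specific client code needed.
+            stmt.execute("CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, msg STRING) USING iceberg");
+            stmt.execute("INSERT INTO demo.db.events VALUES (1, 'hello lakehouse')");
+        }
+    }
+}
+```
+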
+### Apache Iceberg
+
+> Apache Iceberg is an open table format for huge analytic datasets. Iceberg adds tables to Trino and Spark that use a high-performance format that works just like a SQL table.
+
+<p align=right>
+<em>
+<a href="http://iceberg.apache.org/">http://iceberg.apache.org/</a>
+</em>
+</p>
+
+### Delta Lake
+
+> Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads.
+
+<p align=right>
+<em>
+<a href="https://delta.io/">https://delta.io</a>
+</em>
+</p>
+
+### Apache Hudi
+
+> Apache Hudi ingests & manages storage of large analytical datasets over DFS (hdfs or cloud stores).
+
+<p align=right>
+<em>
+<a href="https://hudi.apache.org/">https://hudi.apache.org</a>
+</em>
+</p>
diff --git a/content/docs/latest/_sources/changelog/v1.5.1-incubating.md.txt b/content/docs/latest/_sources/changelog/v1.5.1-incubating.md.txt
new file mode 100644
index 0000000..405eba7
--- /dev/null
+++ b/content/docs/latest/_sources/changelog/v1.5.1-incubating.md.txt
@@ -0,0 +1,11 @@
+## Changelog for Apache Kyuubi(Incubating) v1.5.1-incubating
+
+[[KYUUBI #2354] Fix NPE in process builder log capture thread](https://github.com/apache/incubator-kyuubi/commit/5e76334e)  
+[[KYUUBI #2296] Fix operation log file handler leak](https://github.com/apache/incubator-kyuubi/commit/809ea2a6)  
+[[KYUUBI #2266] The default value of frontend.connection.url.use.hostname should be set to true to be consistent with previous versions](https://github.com/apache/incubator-kyuubi/commit/d3e25f08)  
+[[KYUUBI #2255]The engine state of Spark's EngineEvent is hardcoded with 0](https://github.com/apache/incubator-kyuubi/commit/2af8bbb4)  
+[[KYUUBI #2008][FOLLOWUP] Support engine type and subdomain in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/d1a2dda0)  
+[[KYUUBI #2156][FOLLOWUP] Fix configuration format in document](https://github.com/apache/incubator-kyuubi/commit/5225b540)  
+[[KYUUBI #2156] Change log to reflect exactly why getting token failed](https://github.com/apache/incubator-kyuubi/commit/21ca7540)  
+[[KYUUBI #2134] Respect Spark bundled log4j in extension modules](https://github.com/apache/incubator-kyuubi/commit/30dc84b5)  
+[[KYUUBI #2150] [DOCS] Fix Getting Started With Kyuubi on Kubernetes](https://github.com/apache/incubator-kyuubi/commit/e232a83a)  
diff --git a/content/docs/latest/_sources/changelog/v1.5.2-incubating.md.txt b/content/docs/latest/_sources/changelog/v1.5.2-incubating.md.txt
new file mode 100644
index 0000000..a5bc2c8
--- /dev/null
+++ b/content/docs/latest/_sources/changelog/v1.5.2-incubating.md.txt
@@ -0,0 +1,16 @@
+## Changelog for Apache Kyuubi(Incubating) v1.5.2-incubating
+
+[[KYUUBI #2841] [1.5] Revert "[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes"](https://github.com/apache/incubator-kyuubi/commit/2b23c0dc)  
+[[KYUUBI #2746][INFRA][1.5] Improve NOTICE of binary release](https://github.com/apache/incubator-kyuubi/commit/35a4c488)  
+[[KYUUBI-2422] Wrap close session with try-finally (#2836)](https://github.com/apache/incubator-kyuubi/commit/cbca761a)  
+[[KYUUBI #2227] Fix operation log dir not deleted issue](https://github.com/apache/incubator-kyuubi/commit/27bfa683)  
+[[KYUUBI #2208] Fixed session close operator log session dir not deleted](https://github.com/apache/incubator-kyuubi/commit/5a2bcb80)  
+[[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes](https://github.com/apache/incubator-kyuubi/commit/9fce6266)  
+[[KYUUBI #2736] Upgrade Jackson 2.13.3](https://github.com/apache/incubator-kyuubi/commit/9466a1ab)  
+[[KYUUBI #2720] Fix KyuubiDatabaseMetaData#supportsCatalogs*](https://github.com/apache/incubator-kyuubi/commit/268d1b27)  
+[[KYUUBI #2686][1.5] Fix lock bug if engine initialization timeout](https://github.com/apache/incubator-kyuubi/commit/7e9511a4)  
+[[KYUUBI #2640] Implement TGetInfoType CLI_ODBC_KEYWORDS](https://github.com/apache/incubator-kyuubi/commit/51067384)  
+[[KYUUBI #2450][FOLLOWUP] Remove opHandle from opHandleSet when exception occurs](https://github.com/apache/incubator-kyuubi/commit/a2c0f783)  
+[[KYUUBI #2478] Backport HIVE-19018 to Kyuubi Beeline](https://github.com/apache/incubator-kyuubi/commit/fbe38de7)  
+[[KYUUBI #2484] Add conf to SessionEvent and display it in EngineSessionPage](https://github.com/apache/incubator-kyuubi/commit/87f81e3c)  
+[[KYUUBI #2450] Update lastAccessTime in getStatus and add opHandle to opHandleSet before run](https://github.com/apache/incubator-kyuubi/commit/8b143689)  
diff --git a/content/docs/latest/_sources/changelog/v1.6.0-incubating.md.txt b/content/docs/latest/_sources/changelog/v1.6.0-incubating.md.txt
new file mode 100644
index 0000000..e08bbd1
--- /dev/null
+++ b/content/docs/latest/_sources/changelog/v1.6.0-incubating.md.txt
@@ -0,0 +1,618 @@
+## Changelog for Apache Kyuubi(Incubating) v1.6.0-incubating
+
+[Revert "[KYUUBI #3020] kyuubi ldap add new config property kyuubi.authentication.ldap.bindpw and kyuubi.authentication.ldap.attrs"](https://github.com/apache/incubator-kyuubi/commit/1f57b108)  
+[Revert "[KYUUBI #3020][FOLLOWUP] Refactor the code style"](https://github.com/apache/incubator-kyuubi/commit/2191dea0)  
+[[KYUUBI #3316] [BUILD] Enable spark-3.3 profile for license check on GA](https://github.com/apache/incubator-kyuubi/commit/2b713f6d)  
+[[KYUUBI #3254] Supplement the licenses of support etcd discovery](https://github.com/apache/incubator-kyuubi/commit/e2c66515)  
+[[KYUUBI #3301] Construct lifetimeTerminatingChecker only when needed](https://github.com/apache/incubator-kyuubi/commit/7fd54499)  
+[[KYUUBI #3272] Synchronize graceful shutdown with main stop sequence](https://github.com/apache/incubator-kyuubi/commit/fe79deee)  
+[[KYUUBI #3281] [MINOR] Use AccessControlException instead of RuntimeException if check privilege failed](https://github.com/apache/incubator-kyuubi/commit/7c157714)  
+[[KYUUBI #3222][FOLLOWUP] Fixing placeholder and config of user in JDBC Authentication Provider](https://github.com/apache/incubator-kyuubi/commit/8b65e0e4)  
+[[KYUUBI #3275] [KYUUBI 3269] [DOCS] Doc for JDBC authentication provider](https://github.com/apache/incubator-kyuubi/commit/04f31153)  
+[[KYUUBI #3217] [DOCS] Doc for using Marcos in row-level filter in Authz](https://github.com/apache/incubator-kyuubi/commit/e76f8f7b)  
+[[KYUUBI #3297] [MINOR] Null is replaced by KyuubiSQLException.featureNotSupported()](https://github.com/apache/incubator-kyuubi/commit/1e3dc52f)  
+[[KYUUBI #3244] Bump Hudi 0.12.0](https://github.com/apache/incubator-kyuubi/commit/84b164ee)  
+[[KYUUBI #3287] Exclude reload4j from hadoop-minikdc](https://github.com/apache/incubator-kyuubi/commit/1348abb1)  
+[[KYUUBI #3222][FOLLOWUP] Introdude JdbcUtils to simplify code](https://github.com/apache/incubator-kyuubi/commit/9d83e632)  
+[[KYUUBI #3020][FOLLOWUP] Refactor the code style](https://github.com/apache/incubator-kyuubi/commit/a8e201a8)  
+[[KYUUBI #3020] kyuubi ldap add new config property kyuubi.authentication.ldap.bindpw and kyuubi.authentication.ldap.attrs](https://github.com/apache/incubator-kyuubi/commit/ef109e18)  
+[[KYUUBI #3226] Privileges should be checked only once in `RuleAuthorization`](https://github.com/apache/incubator-kyuubi/commit/6ac28198)  
+[[KYUUBI #3156] Expose REST frontend connection metrics](https://github.com/apache/incubator-kyuubi/commit/d15ca518)  
+[[KYUUBI #3241][DOCS] Update `Develop Tools / Building a Runnable Distribution`](https://github.com/apache/incubator-kyuubi/commit/dab0583e)  
+[[KYUUBI #3255] Add miss engine type config docs](https://github.com/apache/incubator-kyuubi/commit/a3b6c675)  
+[[KYUUBI #3214] Plan only mode should unset when mode value is incorrect](https://github.com/apache/incubator-kyuubi/commit/92f3a532)  
+[[KYUUBI #3239] [Subtask] DorisSQLEngine - Add integration tests](https://github.com/apache/incubator-kyuubi/commit/3cf5a20b)  
+[[KYUUBI #3252] Fix the problem that |release| in the document was not replaced correctly](https://github.com/apache/incubator-kyuubi/commit/ecda4188)  
+[[KYUUBI #3247] Minor clean up Kyuubi JDBC code](https://github.com/apache/incubator-kyuubi/commit/3b990d71)  
+[[KYUUBI #3222]  JDBC Authentication Provider for server](https://github.com/apache/incubator-kyuubi/commit/a63587b1)  
+[[KYUUBI #3243] Move trait Logging#initializeLogging to object Logging](https://github.com/apache/incubator-kyuubi/commit/8c2e7746)  
+[[KYUUBI #3245] Add spark-3.3 profile in building.md](https://github.com/apache/incubator-kyuubi/commit/e584880f)  
+[[KYUUBI #3184] OperationResource rowset api should have default values for maxrows and fetchorientation](https://github.com/apache/incubator-kyuubi/commit/1466165a)  
+[[KYUUBI #3220] Make kyuubi.engine.ui.stop.enabled false in HistoryServer](https://github.com/apache/incubator-kyuubi/commit/76b44c98)  
+[[KYUUBI #1776][FOLLOWUP] Fill empty td tag for `Failure Reason` column in EngineTable](https://github.com/apache/incubator-kyuubi/commit/f60e9d47)  
+[[KYUUBI #3157][DOC] Modify logging doc due to using log4j2 instead of log4j](https://github.com/apache/incubator-kyuubi/commit/bbe7a4d7)  
+[[KYUUBI #3228] [Subtask] Connectors for Spark SQL Query Engine -> TPC-DS](https://github.com/apache/incubator-kyuubi/commit/5ed671c8)  
+[[KYUUBI #3138] [Subtask] DorisSQLEngine - Add jdbc engine to dist](https://github.com/apache/incubator-kyuubi/commit/c473634e)  
+[[KYUUBI #3230] Flink SQL engine supports run across versions](https://github.com/apache/incubator-kyuubi/commit/db0047d5)  
+[[KYUUBI #3072][DOC] Add a doc of Flink Table Store for Flink SQL engine](https://github.com/apache/incubator-kyuubi/commit/06d43cb3)  
+[[KYUUBI #3170] Expose thrift binary connection metrics](https://github.com/apache/incubator-kyuubi/commit/23ad7801)  
+[[KYUUBI #3227] SparkConfParser supports parse bytes and time](https://github.com/apache/incubator-kyuubi/commit/9bdff9ba)  
+[[KYUUBI #3219] Error renew delegation tokens: Unknown version of delegation token 8](https://github.com/apache/incubator-kyuubi/commit/6c4a8b08)  
+[[KYUUBI #3080][DOC] Add a doc of the Flink Table Store for the Trino SQL Engine](https://github.com/apache/incubator-kyuubi/commit/e847ab35)  
+[[KYUUBI #3107] [Subtask] DorisSQLEngine - Add process builder (#3123)](https://github.com/apache/incubator-kyuubi/commit/33b70cfe)  
+[[KYUUBI #3211] [Subtask] Connectors for Spark SQL Query Engine -> TPC-H](https://github.com/apache/incubator-kyuubi/commit/f36508b5)  
+[[KYUUBI #3206] Change Flink default version to 1.15](https://github.com/apache/incubator-kyuubi/commit/86964fef)  
+[[KYUUBI #833] Check if `spark.kubernetes.executor.podNamePrefix` is invalid](https://github.com/apache/incubator-kyuubi/commit/b3723392)  
+[[KYUUBI #3098] Unify the event log code path](https://github.com/apache/incubator-kyuubi/commit/d0865255)  
+[[KYUUBI #3210] [DOCS] Mention Kyuubi Spark SQL extension supports Spark 3.3](https://github.com/apache/incubator-kyuubi/commit/6aa898e5)  
+[[KYUUBI #3204] Fix duplicated ldapServer#close in LdapAuthenticationProviderImplSuite](https://github.com/apache/incubator-kyuubi/commit/bedc22cb)  
+[[KYUUBI #3209] Support configure TPC-H connector in runtime](https://github.com/apache/incubator-kyuubi/commit/c9cc9b7e)  
+[[KYUUBI #3200] Make KyuubiSessionEvent.sessionId clear](https://github.com/apache/incubator-kyuubi/commit/875fedd1)  
+[[KYUUBI #3186] Support applying Row-level Filter and Data Masking policies for DatasourceV2 in Authz module](https://github.com/apache/incubator-kyuubi/commit/64b1d920)  
+[[KYUUBI #2584][INFRA] Migrate CI to Ubuntu 22.04](https://github.com/apache/incubator-kyuubi/commit/6061098a)  
+[[KYUUBI #3172][FLINK] Fix failed test cases in Flink 1.15](https://github.com/apache/incubator-kyuubi/commit/a75de1b5)  
+[[KYUUBI #3203] [DOCS] Fix typo](https://github.com/apache/incubator-kyuubi/commit/e48205d7)  
+[[KYUUBI #3192] Refactor TPCDSConf](https://github.com/apache/incubator-kyuubi/commit/7720c9f6)  
+[[KYUUBI #3180] Add component version util](https://github.com/apache/incubator-kyuubi/commit/3cdf84e9)  
+[[KYUUBI #3162] Bump Hadoop 3.3.4](https://github.com/apache/incubator-kyuubi/commit/782e5fb9)  
+[[KYUUBI #3191] [DOCS] Add missing binary scala version in engine jar name](https://github.com/apache/incubator-kyuubi/commit/b8162f15)  
+[[KYUUBI #3199] [BUILD] Fix travis JAVA_HOME](https://github.com/apache/incubator-kyuubi/commit/9d4d2948)  
+[[KYUUBI #3194][Scala-2.13] Refine deprecated config](https://github.com/apache/incubator-kyuubi/commit/fdb91686)  
+[[KYUUBI #3198] [DOCS] Fix index of Hudi Flink connector](https://github.com/apache/incubator-kyuubi/commit/c64b7648)  
+[[KYUUBI #3189] [BUILD] Bump jetcd 0.7.3 and pin Netty dependencies](https://github.com/apache/incubator-kyuubi/commit/a46d6550)  
+[[KYUUBI #3190] [BUILD] Use jdk_switcher to setup JAVA_HOME](https://github.com/apache/incubator-kyuubi/commit/ea47cbc1)  
+[[KYUUBI #3145] Bump log4j from 2.17.2 to 2.18.0](https://github.com/apache/incubator-kyuubi/commit/3618002b)  
+[[KYUUBI #3135] Bump gRPC from 1.47.0 to 1.48.0](https://github.com/apache/incubator-kyuubi/commit/b87ee983)  
+[[KYUUBI #3178] Add application operation docs](https://github.com/apache/incubator-kyuubi/commit/30da9068)  
+[[KYUUBI #3174] Update MATURITY for C30, RE50, CO20, CO40, CO50, CS10, IN10](https://github.com/apache/incubator-kyuubi/commit/024fa2db)  
+[[KYUUBI #3175] Add session conf advisor docs](https://github.com/apache/incubator-kyuubi/commit/3e860145)  
+[[KYUUBI #3141] Trino engine etcd support](https://github.com/apache/incubator-kyuubi/commit/c6caeb83)  
+[[KYUUBI #3150] Expose metadata request metrics](https://github.com/apache/incubator-kyuubi/commit/0089f2f0)  
+[[KYUUBI #3070][DOC] Add a doc of the Hudi connector for the Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/38c7c160)  
+[[KYUUBI #3104] Support SSL for Etcd](https://github.com/apache/incubator-kyuubi/commit/c17829bf)  
+[[KYUUBI #3152] Introduce JDBC parameters to control connection timeout](https://github.com/apache/incubator-kyuubi/commit/65ccf78b)  
+[[KYUUBI #3136] Change Map to a case class ApplicationInfo as the application info holder](https://github.com/apache/incubator-kyuubi/commit/24b93840)  
+[[KYUUBI #2240][SUB-TASK] Skip add metadata manager if frontend does not support rest](https://github.com/apache/incubator-kyuubi/commit/7aca75b1)  
+[[KYUUBI #3131] Improve operation state change logging](https://github.com/apache/incubator-kyuubi/commit/210d3567)  
+[[KYUUBI #3158] Fix npe issue when formatting the kyuubi-ctl output](https://github.com/apache/incubator-kyuubi/commit/0ddf7e38)  
+[[KYUUBI #3160][DOCS] `Dependencies` links in `Connectors for Spark SQL Query Engine` pages jump to wrong place #3160 (#3161)](https://github.com/apache/incubator-kyuubi/commit/3729a998)  
+[[KYUUBI #3154][Subtask] Connectors for Spark SQL Query Engine -> TiDB/TiKV](https://github.com/apache/incubator-kyuubi/commit/da87ca55)  
+[[KYUUBI #3082] Add iceberg connector doc for Trino SQL Engine](https://github.com/apache/incubator-kyuubi/commit/60cb4bd0)  
+[[KYUUBI #3071][DOC] Add iceberg connector for Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/0f21aa94)  
+[[KYUUBI #3153] Move batch util class to kyuubi rest sdk for programing friendly](https://github.com/apache/incubator-kyuubi/commit/34e7d1ad)  
+[[KYUUBI #3067][DOC] Add Flink Table Store connector doc for Spark SQL Engine](https://github.com/apache/incubator-kyuubi/commit/91a25349)  
+[[KYUUBI #3148] Change etcd docker image to recover arm64 CI](https://github.com/apache/incubator-kyuubi/commit/137e818c)  
+[[KYUUBI #3119] [TEST] Using  more light-weight SparkPi for batch related tests](https://github.com/apache/incubator-kyuubi/commit/6b414083)  
+[[KYUUBI #3023] Kyuubi Hive JDBC: Replace UGI-based Kerberos authentication w/ JAAS](https://github.com/apache/incubator-kyuubi/commit/87782097)  
+[[KYUUBI #3144] Remove deprecated KyuubiDriver in services manifest](https://github.com/apache/incubator-kyuubi/commit/d5dae096)  
+[[KYUUBI #3143] Check class loadable before applying SLF4JBridgeHandler](https://github.com/apache/incubator-kyuubi/commit/d07d7cc2)  
+[[KYUUBI #3139] Override toString method for rest dto classes](https://github.com/apache/incubator-kyuubi/commit/7b15f2ed)  
+[[KYUUBI #3087] Convert the kyuubi batch conf with `spark.` prefix so that spark could identify](https://github.com/apache/incubator-kyuubi/commit/eb96db54)  
+[[KYUUBI #3133] Always run Flink statement in sync mode](https://github.com/apache/incubator-kyuubi/commit/976af3d9)  
+[[KYUUBI #3121] [CI] Fix GA oom issue](https://github.com/apache/incubator-kyuubi/commit/0e910197)  
+[[KYUUBI #3126] Using markdown 3.3.7 for kyuubi document build](https://github.com/apache/incubator-kyuubi/commit/64090f50)  
+[[KYUUBI #3106] Correct `RelMetadataProvider` used in flink-sql-engine](https://github.com/apache/incubator-kyuubi/commit/0b3f6b73)  
+[[KYUUBI #3069][DOC] Add Iceberg connector doc for Spark SQL Engine](https://github.com/apache/incubator-kyuubi/commit/5c1ea6e5)  
+[[KYUUBI #3068][DOC] Add the Hudi connector doc for Spark SQL Query Engine](https://github.com/apache/incubator-kyuubi/commit/f1312ea4)  
+[[KYUUBI #3108][DOC] Fix path errors in the build document](https://github.com/apache/incubator-kyuubi/commit/4b640b72)  
+[[KYUUBI #3111] Replace HashMap with singletonMap](https://github.com/apache/incubator-kyuubi/commit/a5a97489)  
+[[KYUUBI #3113] Bump up delta lake version from 2.0.0rc1 to 2.0.0](https://github.com/apache/incubator-kyuubi/commit/65c3d2fb)  
+[[KYUUBI #3101] [Subtask][#3100] Build the content for extension points documentation](https://github.com/apache/incubator-kyuubi/commit/6c8024c8)  
+[[KYUUBI #3102] Fix multi endpoints for etcd](https://github.com/apache/incubator-kyuubi/commit/16f41694)  
+[[KYUUBI #3008] Bump prometheus from 0.14.1 to 0.16.0](https://github.com/apache/incubator-kyuubi/commit/b0685f9b)  
+[[KYUUBI #3095] Move TPC-DS/TPC-H queries to unique folder](https://github.com/apache/incubator-kyuubi/commit/7f592ecf)  
+[[KYUUBI #3094] Code refactor on Kyuubi Hive JDBC driver](https://github.com/apache/incubator-kyuubi/commit/eb705bd1)  
+[[KYUUBI #3092] Replace apache commons Base64 w/ JDK](https://github.com/apache/incubator-kyuubi/commit/47f8f9cc)  
+[[KYUUBI #3093] Fix Kyuubi Hive JDBC driver SPNEGO header](https://github.com/apache/incubator-kyuubi/commit/77b6ee0d)  
+[[KYUUBI #3050] Bump Apache Iceberg 0.14.0](https://github.com/apache/incubator-kyuubi/commit/69996224)  
+[[KYUUBI #3044] Bump Spark 3.2.2](https://github.com/apache/incubator-kyuubi/commit/720bc00c)  
+[[KYUUBI #3052][FOLLOWUP] Do not use the ip in proxy http header for authentication to prevent CVE](https://github.com/apache/incubator-kyuubi/commit/d75f48ea)  
+[[KYUUBI #3051] Support to get the  real client ip address for thrift connection when using VIP as kyuubi server load balancer](https://github.com/apache/incubator-kyuubi/commit/8f3d7898)  
+[[KYUUBI #3046][Metrics] Add meter metrics for recording the rate of the operation state for each kyuubi operation](https://github.com/apache/incubator-kyuubi/commit/4bb06542)  
+[[KYUUBI #3045][FOLLOWUP] Correct the common options and add docs for kyuubi-admin command](https://github.com/apache/incubator-kyuubi/commit/99934591)  
+[[KYUUBI #3076][Subtask][#3039] Add the docs for rest api - Batch Resource](https://github.com/apache/incubator-kyuubi/commit/9cb8041d)  
+[[KYUUBI #3077] Remove meaningless statement override in LaunchEngine](https://github.com/apache/incubator-kyuubi/commit/6a6044be)  
+[[KYUUBI #3018] [Subtask] DorisSQLEngine - GetColumns Operation](https://github.com/apache/incubator-kyuubi/commit/419d725c)  
+[[KYUUBI #3073] CredentialsManager should use appUser to renew credential](https://github.com/apache/incubator-kyuubi/commit/82d61c9f)  
+[[KYUUBI #3065] Support to retry the killApplicationByTag for JpsApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/0857786e)  
+[[KYUUBI #3054] Add description of the discovery client in the conf doc](https://github.com/apache/incubator-kyuubi/commit/ce72a502)  
+[[KYUUBI #3043][FOLLOWUP] Restore accidentally removed public APIs of kyuubi-hive-jdbc module](https://github.com/apache/incubator-kyuubi/commit/642b2769)  
+[[KYUUBI #3060] [Subtask][#3059] Build content of the connector document section](https://github.com/apache/incubator-kyuubi/commit/48647623)  
+[[KYUUBI #3045] Support to do admin rest request with kyuubi-adminctl](https://github.com/apache/incubator-kyuubi/commit/a7d190dd)  
+[[KYUUBI #3055] Expose client ip address into batch request conf](https://github.com/apache/incubator-kyuubi/commit/4c3a9ed0)  
+[[KYUUBI #3052] Support to get the real client ip address for http connection when using VIP as kyuubi server load balancer](https://github.com/apache/incubator-kyuubi/commit/a3973a0b)  
+[[KYUUBI #3043] Clean up Kyuubi Hive JDBC client](https://github.com/apache/incubator-kyuubi/commit/b99f25f2)  
+[[KYUUBI #3047] Fallback krb5 conf to OS if not configured](https://github.com/apache/incubator-kyuubi/commit/a5f733b4)  
+[[KYUUBI #2644] Add etcd discovery client for HA](https://github.com/apache/incubator-kyuubi/commit/32970ce6)  
+[[KYUUBI #3040] [Subtask][#3039] Build the skeleton of client side documentation](https://github.com/apache/incubator-kyuubi/commit/9060bf22)  
+[[KYUUBI #2974][FEATURE] EOL Support for Spark 3.0](https://github.com/apache/incubator-kyuubi/commit/c1158acc)  
+[[KYUUBI #3042] Kyuubi Hive JDBC should throw KyuubiSQLException](https://github.com/apache/incubator-kyuubi/commit/1bc3916d)  
+[[KYUUBI #3037] Handles configuring the JUL -> SLF4J bridge](https://github.com/apache/incubator-kyuubi/commit/c5d29260)  
+[[KYUUBI #3033][Bug] Kyuubi failed to start due to PID directory not exists](https://github.com/apache/incubator-kyuubi/commit/d3446675)  
+[[KYUUBI #3028][FLINK] Bump Flink versions to 1.14.5 and 1.15.1](https://github.com/apache/incubator-kyuubi/commit/1fbe16fc)  
+[[KYUUBI #2478][FOLLOWUP] Fix bin/beeline without -u exits unexpectedly](https://github.com/apache/incubator-kyuubi/commit/e4929949)  
+[[KYUUBI #3025] Fix the kyuubi restful href link format issue](https://github.com/apache/incubator-kyuubi/commit/95cb57e8)  
+[[KYUUBI #3007] Bump scopt from 4.0.1 to 4.1.0](https://github.com/apache/incubator-kyuubi/commit/c652bba4)  
+[[KYUUBI #3019] Backport HIVE-21538 - Beeline: password source though the console reader did not pass to connection param](https://github.com/apache/incubator-kyuubi/commit/a6499c6c)  
+[[KYUUBI #3017] kyuubi-ctl should print error message to right place](https://github.com/apache/incubator-kyuubi/commit/3c75e9de)  
+[[KYUUBI #3010] Bump Jetty from 9.4.41.v20210516 to 9.4.48.v20220622](https://github.com/apache/incubator-kyuubi/commit/1f59a592)  
+[[KYUUBI #3011] Bump swagger from 2.1.11 to 2.2.1](https://github.com/apache/incubator-kyuubi/commit/dc6e764f)  
+[[KYUUBI #3009] Bump Jersey from 2.35 to 2.36](https://github.com/apache/incubator-kyuubi/commit/a2431d0c)  
+[[KYUUBI #2801][FOLLOWUP] Also check whether the batch main resource path is in local dir allow list](https://github.com/apache/incubator-kyuubi/commit/c922ae28)  
+[[KYUUBI #3012] Remove unused thrift request max attempts and related ut](https://github.com/apache/incubator-kyuubi/commit/13e618cf)  
+[[KYUUBI #3005] [DOCS] Correct spelling errors and optimizations in 'Building Kyuubi Documentation' part](https://github.com/apache/incubator-kyuubi/commit/3203829f)  
+[[KYUUBI #3004] Clean up JDBC shaded client pom and license](https://github.com/apache/incubator-kyuubi/commit/3e5a92ef)  
+[[KYUUBI #2895] Show final info in trino engine](https://github.com/apache/incubator-kyuubi/commit/66a45f3e)  
+[[KYUUBI #2984] Refactor TPCDS configurations using SparkConfParser](https://github.com/apache/incubator-kyuubi/commit/9e2aaffc)  
+[[KYUUBI #2996] Remove Hive storage-api dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/4b8dc796)  
+[[KYUUBI #2997] Use spark shim set current namespace](https://github.com/apache/incubator-kyuubi/commit/407fc8db)  
+[[KYUUBI #2994] Remove Hive common dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/774934fe)  
+[[KYUUBI #2850][FOLLOWUP] Replace log4j2.properties by log4j2.xml](https://github.com/apache/incubator-kyuubi/commit/c7e2b322)  
+[[KYUUBI #2801] Add local dir allow list and check the application access path URI](https://github.com/apache/incubator-kyuubi/commit/b585fb42)  
+[[KYUUBI #2953] Support to interrupt the thrift request if remote engine is broken](https://github.com/apache/incubator-kyuubi/commit/03e55e0c)  
+[[KYUUBI #2999] Fix Kyuubi Hive Beeline dependencies](https://github.com/apache/incubator-kyuubi/commit/5eb83b4c)  
+[[KYUUBI #2850][FOLLOWUP] Fix default log4j2 configuration](https://github.com/apache/incubator-kyuubi/commit/8dddfeb0)  
+[[KYUUBI #2977] [BATCH] Using KyuubiApplicationManger#tagApplication help tag batch application](https://github.com/apache/incubator-kyuubi/commit/b174d0c1)  
+[[KYUUBI #2993] Fix typo in KyuubiConf and mark more config entries server only](https://github.com/apache/incubator-kyuubi/commit/163e0f82)  
+[[KYUUBI #2987] Remove Hive shims-common and shims-0.23 dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/d8d6903f)  
+[[KYUUBI #2985] Prompt configuration when starting engine timeout](https://github.com/apache/incubator-kyuubi/commit/8d09c83b)  
+[[KYUUBI #2989] Remove HS2 active-passive support in Kyuubi Hive JDBC client](https://github.com/apache/incubator-kyuubi/commit/56c01616)  
+[[KYUUBI #2983] Remove Hive llap-client dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/e41ef566)  
+[[KYUUBI #2981] Improve TPC-DS scan performance](https://github.com/apache/incubator-kyuubi/commit/3a80f33b)  
+[[KYUUBI #2917] Remove Hive service dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/145a18db)  
+[[KYUUBI #2868] [K8S] Add KubernetesApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/3bc299d0)  
+[[KYUUBI #2979] Fix helm icon url](https://github.com/apache/incubator-kyuubi/commit/cfe380ae)  
+[[KYUUBI #2978] [SUB-TASK][KPIP-4] If batch app status not found from cluster manager, fall back to metadata store](https://github.com/apache/incubator-kyuubi/commit/b115c8dd)  
+[[KYUUBI #2975] Code improvement in rest client](https://github.com/apache/incubator-kyuubi/commit/bd2f5b23)  
+[[KYUUBI #2964] [SUB-TASK][KPIP-4] Refine the batch response and render](https://github.com/apache/incubator-kyuubi/commit/6f308c43)  
+[[KYUUBI #2976] Expose session name into kyuubi engine tab](https://github.com/apache/incubator-kyuubi/commit/2d0bb9f2)  
+[[KYUUBI #2850][FOLLOWUP] Provide log4j2.xml.template in binary and use log4j2-defaults.xml](https://github.com/apache/incubator-kyuubi/commit/cec8b03f)  
+[[KYUUBI #2963] Bump Delta 2.0.0rc1](https://github.com/apache/incubator-kyuubi/commit/b7cd6f97)  
+[[KYUUBI #2918][Bug] Kyuubi integrated Ranger failed to query: table stats must be specified](https://github.com/apache/incubator-kyuubi/commit/8d4d00fe)  
+[[KYUUBI #2966] Remove TProtocolVersion from SessionHandle/OperationHandle](https://github.com/apache/incubator-kyuubi/commit/a9908a1b)  
+[[KYUUBI #2972] Using stdout for the output of kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/33872057)  
+[[KYUUBI #2973] Decorate LOG in the RetryableRestClient with static final](https://github.com/apache/incubator-kyuubi/commit/5412d1b9)  
+[[KYUUBI #2962] [SUB-TASK][KPIP-4] Throw exception if the metadata update count is zero](https://github.com/apache/incubator-kyuubi/commit/10affbf6)  
+[[KYUUBI #2956] Support to config the connect/socket timeout of rest client for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/6e4b5582)  
+[[KYUUBI #2960] TFrontendService.SERVER_VERSION shall be HIVE_CLI_SERVICE_PROTOCOL_V11](https://github.com/apache/incubator-kyuubi/commit/cf27278f)  
+[[KYUUBI #2957] [SUB-TASK][KPIP-4] Use canonical host name for kyuubi instance](https://github.com/apache/incubator-kyuubi/commit/defae6bd)  
+[[KYUUBI #2952] Remove OperationType from OperationHandle for simplification](https://github.com/apache/incubator-kyuubi/commit/6c44a7bb)  
+[[KYUUBI #2955] BatchRequest args fix: need toString operation for different data types](https://github.com/apache/incubator-kyuubi/commit/fbb434c4)  
+[[KYUUBI #2949] Flaky test: execute statement - analysis exception](https://github.com/apache/incubator-kyuubi/commit/994dc6ce)  
+[[KYUUBI #2951] No need to extend CompositeService for MetadataManager](https://github.com/apache/incubator-kyuubi/commit/c8f18f00)  
+[[KYUUBI #2948] Remove thrift request timeout for KyuubiSyncThriftClient](https://github.com/apache/incubator-kyuubi/commit/6a9d5ff2)  
+[[KYUUBI #2943][Bug][K8S] Remove Start Local Kyuubi Server For Kyuubi On K8S Test](https://github.com/apache/incubator-kyuubi/commit/caa3ed2a)  
+[[KYUUBI #2924] Correct the frontend server start state](https://github.com/apache/incubator-kyuubi/commit/de2e11c2)  
+[[KYUUBI #886][FOLLOWUP] Support to reload hadoop conf for KyuubiTHttpFrontendService](https://github.com/apache/incubator-kyuubi/commit/9bc0aa67)  
+[[KYUUBI #2929] Kyuubi integrated Ranger does not support the CTAS syntax](https://github.com/apache/incubator-kyuubi/commit/7460e745)  
+[[KYUUBI #2935] Support spnego authentication for thrift http transport mode](https://github.com/apache/incubator-kyuubi/commit/ceb66bd6)  
+[[KYUUBI #2894] Add synchronized for the ciphers of internal security accessor](https://github.com/apache/incubator-kyuubi/commit/37c0d425)  
+[[KYUUBI #2927] Fix the thread in ScheduleThreadExecutorPool can't be shutdown immediately](https://github.com/apache/incubator-kyuubi/commit/f629992f)  
+[[KYUUBI #2876] Bump Hudi 0.11.1](https://github.com/apache/incubator-kyuubi/commit/125730a7)  
+[[KYUUBI #2922] Clean up SparkConsoleProgressBar when SQL execution fails](https://github.com/apache/incubator-kyuubi/commit/aba785f3)  
+[[KYUUBI #2919] Fix typo and wording for JDBCMetadataStoreConf](https://github.com/apache/incubator-kyuubi/commit/825c70db)  
+[[KYUUBI #2920] Fix typo for mysql metadata schema](https://github.com/apache/incubator-kyuubi/commit/f3610b2b)  
+[[KYUUBI #2890] Get the db From SparkSession When TableIdentifier's Database Field Is Empty](https://github.com/apache/incubator-kyuubi/commit/062d8746)  
+[[KYUUBI #2915] Revert "[KYUUBI #2000][DEPS] Bump Hadoop 3.3.2"](https://github.com/apache/incubator-kyuubi/commit/3435e2ae)  
+[[KYUUBI #2911] [SUB-TASK][KPIP-4] If the kyuubi instance unreachable, support to backfill state from resource manager and mark batch closed by remote kyuubi instance](https://github.com/apache/incubator-kyuubi/commit/089cf412)  
+[[KYUUBI #2912] [INFRA][DOCS] Improve release md](https://github.com/apache/incubator-kyuubi/commit/07080f35)  
+[[KYUUBI #2905][DOCS] Update the number of new committers in MATURITY.md](https://github.com/apache/incubator-kyuubi/commit/2305159a)  
+[[KYUUBI #2745] [Subtask] DorisSQLEngine - GetTables Operation](https://github.com/apache/incubator-kyuubi/commit/ea7ca789)  
+[[KYUUBI #2628][FOLLOWUP] Support waitCompletion for submit batch](https://github.com/apache/incubator-kyuubi/commit/c664c84f)  
+[[KYUUBI #2827] [BUILD][TEST] Decouple integration tests from kyuubi-server](https://github.com/apache/incubator-kyuubi/commit/2fd4e3a8)  
+[[KYUUBI #2898] Bump maven-surefire-plugin 3.0.0-M7](https://github.com/apache/incubator-kyuubi/commit/7f0c53a0)  
+[[KYUUBI #2628][FOLLOWUP] Reuse the kyuubi-ctl batch commands for SubmitBatchCommand](https://github.com/apache/incubator-kyuubi/commit/e1f74673)  
+[[KYUUBI #2897] Remove Hive metastore dependencies from Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/62b6987a)  
+[[KYUUBI #2782] Decouple Kyuubi Hive JDBC from Hive Serde](https://github.com/apache/incubator-kyuubi/commit/e3bf6044)  
+[[KYUUBI #2854] Add exception field in KyuubiSessionEvent](https://github.com/apache/incubator-kyuubi/commit/99959b89)  
+[[KYUUBI #2873] [INFRA][DOCS] Improve release template script](https://github.com/apache/incubator-kyuubi/commit/9de3365f)  
+[[KYUUBI #2761] Flaky Test: engine.jdbc.doris.StatementSuite - test select](https://github.com/apache/incubator-kyuubi/commit/4e975507)  
+[[KYUUBI #2834] [SUB-TASK][KPIP-4] Support to retry the metadata requests on transient issue and unblock main thread](https://github.com/apache/incubator-kyuubi/commit/7baf9895)  
+[[KYUUBI #2543] Add `maxPartitionBytes` configuration for TPC-DS connector](https://github.com/apache/incubator-kyuubi/commit/7b24ee93)  
+[[KYUUBI #886] Add HTTP transport mode support to KYUUBI - no Kerberos support](https://github.com/apache/incubator-kyuubi/commit/1ea245d2)  
+[[KYUUBI #2628][FOLLOWUP] Refine kyuubi-ctl batch commands](https://github.com/apache/incubator-kyuubi/commit/27330ddb)  
+[[KYUUBI #2859][SUB-TASK][KPIP-4] Support `--conf` for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/57e37334)  
+[[KYUUBI #2888] Bump Spark-3.3.0](https://github.com/apache/incubator-kyuubi/commit/eaffb27c)  
+[[KYUUBI #2708] Open engine session and renew engine credentials in the one](https://github.com/apache/incubator-kyuubi/commit/37229d41)  
+[[KYUUBI #2668][FOLLOWUP] Add log4j for rest client test](https://github.com/apache/incubator-kyuubi/commit/b987a680)  
+[[KYUUBI #2881] [SUB-TASK][KPIP-4] Rest client supports retry request if catch net exception](https://github.com/apache/incubator-kyuubi/commit/dea68bc0)  
+[[KYUUBI #2883] [Bug] java.lang.NoClassDefFoundError: org/apache/hadoop/hive/common/ValidWriteIdList in HiveDelegationTokenProvider#initialize](https://github.com/apache/incubator-kyuubi/commit/bdceaaf1)  
+[[KYUUBI #2813] Bump Iceberg 0.13.2](https://github.com/apache/incubator-kyuubi/commit/87dd1df5)  
+[[KYUUBI #2628][SUB-TASK][KPIP-4] Implement kyuubi-ctl for batch job operation](https://github.com/apache/incubator-kyuubi/commit/cb483385)  
+[[KYUUBI #2872] Catch the exception for the iterator job when incremental collect is enabled](https://github.com/apache/incubator-kyuubi/commit/383a7a84)  
+[[KYUUBI #2870] Fix sf0 query error in TPCH](https://github.com/apache/incubator-kyuubi/commit/88388951)  
+[[KYUUBI #2861][FOLLOWUP][GA] Daily publish snapshot with profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/e7a872fd)  
+[[KYUUBI #2865] Bump Spark 3.3.0-rc6](https://github.com/apache/incubator-kyuubi/commit/5510421b)  
+[[KYUUBI #2863] Unify the logic of tpch and tpcds to generate golden file](https://github.com/apache/incubator-kyuubi/commit/9403566d)  
+[[KYUUBI #2862] [BUILD] Release script supports Spark 3.3](https://github.com/apache/incubator-kyuubi/commit/c4955a8d)  
+[[KYUUBI #2861] [GA] Daily publish snapshot with profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/67ad2556)  
+[[KYUUBI #2624] Support isExtended for FilteredShowTablesCommand in AuthZ module.](https://github.com/apache/incubator-kyuubi/commit/ee8eceb2)  
+[[KYUUBI #2858] Support skipTests for kyuubi rest client module](https://github.com/apache/incubator-kyuubi/commit/b3fcc9ed)  
+[[KYUUBI #2848] Global temp view should only exist in session catalog](https://github.com/apache/incubator-kyuubi/commit/60b0cd18)  
+[[KYUUBI #2849] Close the engine alive pool gracefully](https://github.com/apache/incubator-kyuubi/commit/ce56d700)  
+[[KYUUBI #2851] Log session name when opening/closing session](https://github.com/apache/incubator-kyuubi/commit/21abfd2b)  
+[[KYUUBI #2247] Change log4j2 properties to xml](https://github.com/apache/incubator-kyuubi/commit/0acf9717)  
+[[KYUUBI #2704] verify TPC-DS query output](https://github.com/apache/incubator-kyuubi/commit/df1ebbad)  
+[[KYUUBI #2846] Add v1.5.2-incubating changelog](https://github.com/apache/incubator-kyuubi/commit/d958a2c8)  
+[[KYUUBI #2842] [TEST] Optimize the output of ExceptionThrowingDelegationTokenProvider in the Test](https://github.com/apache/incubator-kyuubi/commit/c2874818)  
+[[KYUUBI #2829] Make secret id static and remove thrift protocol from RPC handles](https://github.com/apache/incubator-kyuubi/commit/12a48ba2)  
+[[KYUUBI #2820][FOLLOWUP] Fix duplicate SPNEGO typo](https://github.com/apache/incubator-kyuubi/commit/3bfebd24)  
+[[KYUUBI #2839] Refactor changelog](https://github.com/apache/incubator-kyuubi/commit/2c7a5651)  
+[[KYUUBI #2781] Fix KyuubiDataSource#getConnection to set user and password](https://github.com/apache/incubator-kyuubi/commit/85d0656b)  
+[[KYUUBI #2845] [GA] Stop daily publish on branch-1.3](https://github.com/apache/incubator-kyuubi/commit/24c74e8c)  
+[[KYUUBI #2805] Add TPC-H queries verification](https://github.com/apache/incubator-kyuubi/commit/032d9ca7)  
+[[KYUUBI #2837] [BUILD] Support publish to private repo](https://github.com/apache/incubator-kyuubi/commit/ead33d79)  
+[[KYUUBI #2820][SUB-TASK][KPIP-4] Support to redirect getLocalLog and closeBatchSession requests across kyuubi instances](https://github.com/apache/incubator-kyuubi/commit/f8e20e3c)  
+[[KYUUBI #2830] Improve Z-Order with Spark3.3](https://github.com/apache/incubator-kyuubi/commit/9d706e55)  
+[[KYUUBI #2746][INFRA] Improve NOTICE of binary release](https://github.com/apache/incubator-kyuubi/commit/a06a2ca4)  
+[[KYUUBI #2825] [BUILD] Remove kyuubi-flink-sql-engine from kyuubi-server dependencies](https://github.com/apache/incubator-kyuubi/commit/ddd60fc4)  
+[[KYUUBI #2211] [Improvement] Add CHANGELOG.md to codebase for maintaining release notes](https://github.com/apache/incubator-kyuubi/commit/dd96983b)  
+[[KYUUBI #2643][FOLLOWUP] Using javax AuthenticationException instead of hadoop AuthenticationException](https://github.com/apache/incubator-kyuubi/commit/daa5bfed)  
+[[KYUUBI #2373][SUB-TASK][KPIP-4] Support to recovery batch session on Kyuubi instances restart](https://github.com/apache/incubator-kyuubi/commit/e7257251)  
+[[KYUUBI #2824] [TEST] Replace test tag ExtendedSQLTest by Slow](https://github.com/apache/incubator-kyuubi/commit/afb08c74)  
+[[KYUUBI #2822] [GA] Set log level to info](https://github.com/apache/incubator-kyuubi/commit/8539a568)  
+[[KYUUBI #2817] Bump Spark 3.3.0-rc5](https://github.com/apache/incubator-kyuubi/commit/9ee4a9e3)  
+[[KYUUBI #2676] Flaky Test: SparkOperationProgressSuite: test operation progress](https://github.com/apache/incubator-kyuubi/commit/411992cd)  
+[[KYUUBI #2812] [SUB-TASK][KPIP-4] Refine the batch info response](https://github.com/apache/incubator-kyuubi/commit/0cb6f162)  
+[[KYUUBI #2469] Support RangerDefaultAuditHandler for AuthZ module](https://github.com/apache/incubator-kyuubi/commit/c56a13af)  
+[[KYUUBI #2807] Trino, Hive and JDBC Engine support session conf in newExecuteStatementOperation](https://github.com/apache/incubator-kyuubi/commit/aabc53ec)  
+[[KYUUBI #2814] Set JAVA_HOME in travis via javac](https://github.com/apache/incubator-kyuubi/commit/9265cf3e)  
+[[KYUUBI #2804] add flaky test report template](https://github.com/apache/incubator-kyuubi/commit/aaf14a12)  
+[[KYUUBI #2800][FOLLOWUP] Return CloseBatchReponse for kyuubi rest client deleteBatch](https://github.com/apache/incubator-kyuubi/commit/d881d318)  
+[[KYUUBI #2800] Refine batch mode code path](https://github.com/apache/incubator-kyuubi/commit/bb98aa75)  
+[[KYUUBI #2802] Retry opening the TSocket in KyuubiSyncThriftClient](https://github.com/apache/incubator-kyuubi/commit/21845266)  
+[[KYUUBI #2742] Introduce admin resource for service admin - refresh frontend hadoop conf without restart](https://github.com/apache/incubator-kyuubi/commit/b0495f3c)  
+[[KYUUBI #2794] Change KyuubiRestException to extend RuntimeException](https://github.com/apache/incubator-kyuubi/commit/9ed652e9)  
+[[KYUUBI #2793][DOCS] Add debugging engine](https://github.com/apache/incubator-kyuubi/commit/a3718f9b)  
+[[KYUUBI #2788] Add excludeDatabases for TPC-H catalogs](https://github.com/apache/incubator-kyuubi/commit/05ee1964)  
+[[KYUUBI #2780] Refine stylecheck](https://github.com/apache/incubator-kyuubi/commit/6cd2ad9e)  
+[[KYUUBI #2789] Kyuubi Spark TPC-H Connector - Add tiny scale](https://github.com/apache/incubator-kyuubi/commit/74ff5cf3)  
+[[KYUUBI #2765][SUB-TASK][KPIP-4] Refactor current kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a4622301)  
+[[KYUUBI #2717][FOLLOWUP] Fix BatchRestApiSuite due to jdbc state store UPPER the batch type](https://github.com/apache/incubator-kyuubi/commit/80d45e42)  
+[[KYUUBI #2717] [SUB-TASK][KPIP-4] Introduce jdbc session state store for batch session multiple HA](https://github.com/apache/incubator-kyuubi/commit/73c6b1b1)  
+[[KYUUBI #2643][FOLLOWUP] Generate spnego auth token dynamically per request](https://github.com/apache/incubator-kyuubi/commit/a04afabd)  
+[[KYUUBI #2741] Add kyuubi-spark-connector-common module](https://github.com/apache/incubator-kyuubi/commit/d24c18db)  
+[[KYUUBI #2775] Add excludeDatabases for TPC-DS catalogs](https://github.com/apache/incubator-kyuubi/commit/d2ceb041)  
+[[KYUUBI #2643][FOLLOWUP] Refine the rest sdk](https://github.com/apache/incubator-kyuubi/commit/fc51cfb3)  
+[[KYUUBI #2553] Kyuubi Spark TPC-DS Connector - Add tiny scale](https://github.com/apache/incubator-kyuubi/commit/8578bcd4)  
+[[KYUUBI #2772] Kyuubi Spark TPC-H Connector - use log4j1](https://github.com/apache/incubator-kyuubi/commit/d49377e7)  
+[[KYUUBI #2643] [SUB-TASK][KPIP-4] Implement kyuubi rest sdk for batch job operation](https://github.com/apache/incubator-kyuubi/commit/b817fcf7)  
+[[KYUUBI #2763] Expected error code for invalid basic/spnego authentication should be SC_FORBIDDEN](https://github.com/apache/incubator-kyuubi/commit/60d559ef)  
+[[KYUUBI #2768] Use the default DB passed in by session in Flink](https://github.com/apache/incubator-kyuubi/commit/a8270163)  
+[[KYUUBI #2764] [DOCS] Fix tables in docs being covered by right toc sidebar](https://github.com/apache/incubator-kyuubi/commit/414d1a86)  
+[[KYUUBI #2760] Add adapter layer in Kyuubi Hive JDBC module](https://github.com/apache/incubator-kyuubi/commit/42f18378)  
+[[KYUUBI #2751] [DOC] Replace sphinx_rtd_theme with sphinx_book_theme](https://github.com/apache/incubator-kyuubi/commit/e1921fc8)  
+[[KYUUBI #2754] [GA] Separate log archive name](https://github.com/apache/incubator-kyuubi/commit/b1895913)  
+[[KYUUBI #2755] [Subtask] DorisSQLEngine - add jdbc label](https://github.com/apache/incubator-kyuubi/commit/1ca56c17)  
+[[KYUUBI #2752] Kyuubi Spark TPC-DS Connector - configurable catalog's name by initialize method](https://github.com/apache/incubator-kyuubi/commit/9766db12)  
+[[KYUUBI #2721] Implement dedicated set/get catalog/database operators](https://github.com/apache/incubator-kyuubi/commit/9b502307)  
+[[KYUUBI #2664] Kyuubi Spark TPC-H Connector - SupportsReportStatistics](https://github.com/apache/incubator-kyuubi/commit/dbe315e8)  
+[[KYUUBI #2471][FOLLOWUP] Remove unexpected test-function.jar](https://github.com/apache/incubator-kyuubi/commit/ff1d7ec7)  
+[[KYUUBI #2743] Colorful kyuubi logo support](https://github.com/apache/incubator-kyuubi/commit/c0f0089f)  
+[[KYUUBI #2736] Upgrade Jackson 2.13.3](https://github.com/apache/incubator-kyuubi/commit/a8943bc3)  
+[[KYUUBI #2735] Test Spark 3.3.0-rc3](https://github.com/apache/incubator-kyuubi/commit/e9797c02)  
+[[KYUUBI #2665] Kyuubi Spark TPC-H Connector - SupportsNamespaces](https://github.com/apache/incubator-kyuubi/commit/49352b5a)  
+[[KYUUBI #2543] Add TPCDSTable generate benchmark](https://github.com/apache/incubator-kyuubi/commit/25383698)  
+[[KYUUBI #2658] [Subtask] DorisSQLEngine with execute statement support](https://github.com/apache/incubator-kyuubi/commit/7f945017)  
+[[KYUUBI #2730] [WIP][KYUUBI #2238] Support Flink 1.15](https://github.com/apache/incubator-kyuubi/commit/c84ea87c)  
+[[KYUUBI #2663] Kyuubi Spark TPC-H Connector - Initial implementation](https://github.com/apache/incubator-kyuubi/commit/81c48b0c)  
+[[KYUUBI #2631] Rename high availability config key to support multi discovery client](https://github.com/apache/incubator-kyuubi/commit/3b81a495)  
+[[KYUUBI #2733] [CI] Cross version verification for spark-3.3](https://github.com/apache/incubator-kyuubi/commit/7a789a25)  
+[[KYUUBI #2285] Trino's result fetching method is changed to a streaming iterator mode to avoid holding data at the server side](https://github.com/apache/incubator-kyuubi/commit/3114b393)  
+[[KYUUBI #2718] [KYUUBI #2405] Support Flink StringData Data Type](https://github.com/apache/incubator-kyuubi/commit/5b9d92e9)  
+[[KYUUBI #2719] [SUB-TASK][KPIP-4] Support internal rest request authentication to enable http request redirection across kyuubi instances](https://github.com/apache/incubator-kyuubi/commit/f1cf95fe)  
+[[KYUUBI #2720] Fix KyuubiDatabaseMetaData#supportsCatalogs*](https://github.com/apache/incubator-kyuubi/commit/95784751)  
+[[KYUUBI #2706] Spark extensions support Spark-3.3](https://github.com/apache/incubator-kyuubi/commit/85cbea40)  
+[[KYUUBI #2714] Log4j2 layout pattern add date](https://github.com/apache/incubator-kyuubi/commit/7584e3ab)  
+[[KYUUBI #2686][FOLLOWUP] Avoid potential flaky test](https://github.com/apache/incubator-kyuubi/commit/45531f01)  
+[[KYUUBI #2594][FOLLOWUP] Fix flaky Test - support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/8905bded)  
+[[KYUUBI #2701] Kyuubi Spark TPC-DS Connector - Rework SupportsReportStatistics and code refactor](https://github.com/apache/incubator-kyuubi/commit/b673b2f5)  
+[[KYUUBI #2712] Bump Spark master to 3.4.0-SNAPSHOT](https://github.com/apache/incubator-kyuubi/commit/7d81fd08)  
+[[KYUUBI #2541] Set nullable in table schema](https://github.com/apache/incubator-kyuubi/commit/93753292)  
+[[KYUUBI #2709] Improve TPCDSTable display in Spark Web UI](https://github.com/apache/incubator-kyuubi/commit/0c5b0d1a)  
+[[KYUUBI #2619] Add profile spark-3.3](https://github.com/apache/incubator-kyuubi/commit/85d68b20)  
+[[KYUUBI #2702] Fix TPC-DS columns name and add TPC-DS queries verification](https://github.com/apache/incubator-kyuubi/commit/aa4ac58c)  
+[[KYUUBI #2348] Add it test for trino engine](https://github.com/apache/incubator-kyuubi/commit/33c81624)  
+[[KYUUBI #2686] Fix lock bug if engine initialization timeout](https://github.com/apache/incubator-kyuubi/commit/c210fdae)  
+[[KYUUBI #2690] Make ProcessBuilder.commands immutable](https://github.com/apache/incubator-kyuubi/commit/010a34d1)  
+[[KYUUBI #2696] [TEST] Stop NoopServer should not throw exception](https://github.com/apache/incubator-kyuubi/commit/5585dd01)  
+[[KYUUBI #2700] Handle SPARK-37929 breaking change in TPCDSCatalog](https://github.com/apache/incubator-kyuubi/commit/18e9d09e)  
+[[KYUUBI #2683] Add INFO log in ServiceDiscovery.stopGracefully](https://github.com/apache/incubator-kyuubi/commit/866e4d1f)  
+[[KYUUBI #2694] EngineEvent.toString outputs application tags](https://github.com/apache/incubator-kyuubi/commit/27030d39)  
+[[KYUUBI #2594] Fix flaky Test - support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/56efdf8c)  
+[[KYUUBI #2668][FOLLOWUP] Remove unused Option because the collection is never null](https://github.com/apache/incubator-kyuubi/commit/b2495e96)  
+[[KYUUBI #2675] Fix compatibility for spark authz with spark v3.3](https://github.com/apache/incubator-kyuubi/commit/60b9f6bc)  
+[[KYUUBI #2680] Remove SwaggerScalaModelConverter after rest dto classes rewritten in Java](https://github.com/apache/incubator-kyuubi/commit/98ed40f5)  
+[[KYUUBI #2642] Fix flaky test - JpsApplicationOperation with spark local mode](https://github.com/apache/incubator-kyuubi/commit/c1fb7bfb)  
+[[KYUUBI #2668] [SUB-TASK][KPIP-4] Rewrite the rest DTO classes in java](https://github.com/apache/incubator-kyuubi/commit/31fdd7ec)  
+[[KYUUBI #2670] Delete the useless judgment in the extractURLComponents method of Utils.java](https://github.com/apache/incubator-kyuubi/commit/32165362)  
+[[KYUUBI #2672] Check if the table exists](https://github.com/apache/incubator-kyuubi/commit/5588cd50)  
+[[KYUUBI #2641] Client should not assume launch engine has completed on exception](https://github.com/apache/incubator-kyuubi/commit/b40bcbda)  
+[[KYUUBI #2666] Backport HIVE-24694 to Kyuubi Hive JDBC](https://github.com/apache/incubator-kyuubi/commit/27cf57bd)  
+[[KYUUBI #2661] [SUB-TASK][KPIP-4] Rename GET /batches/$batchId/log to GET /batches/$batchId/localLog](https://github.com/apache/incubator-kyuubi/commit/a163f3a8)  
+[[KYUUBI #2650] Add FilteredShowColumnsCommand to AuthZ module](https://github.com/apache/incubator-kyuubi/commit/2facc0b6)  
+[[KYUUBI #2576][FOLLOWUP] Bump Hudi 0.11.0](https://github.com/apache/incubator-kyuubi/commit/f16ac8be)  
+[[KYUUBI #2540] Kyuubi Spark TPC-DS Connector - SupportsNamespaces](https://github.com/apache/incubator-kyuubi/commit/b088f39f)  
+[[KYUUBI #2655] Using the defined app keys for JpsApplicationOperation](https://github.com/apache/incubator-kyuubi/commit/31878859)  
+[[KYUUBI #2640] Implement TGetInfoType CLI_ODBC_KEYWORDS](https://github.com/apache/incubator-kyuubi/commit/8de2f5f1)  
+[[KYUUBI #2601] Add a config to support different service discovery client class implementation](https://github.com/apache/incubator-kyuubi/commit/50584f2a)  
+[[KYUUBI #2471] Fix the bug of dynamically loading external packages](https://github.com/apache/incubator-kyuubi/commit/1ab68974)  
+[[KYUUBI #2636] Refine BatchesResourceSuite](https://github.com/apache/incubator-kyuubi/commit/a5bb93e5)  
+[[KYUUBI #2634] [SUB-TASK][KPIP-4] Enhance the response error msg](https://github.com/apache/incubator-kyuubi/commit/e4e88355)  
+[[KYUUBI #2616] Remove embedded mode support in Kyuubi Hive JDBC driver](https://github.com/apache/incubator-kyuubi/commit/52817e81)  
+[[KYUUBI #2605] Make SQLOperationListener configurable](https://github.com/apache/incubator-kyuubi/commit/9a2fc86b)  
+[[KYUUBI #2474] [Improvement] Add FilteredShowFunctionsCommand to Authz module](https://github.com/apache/incubator-kyuubi/commit/d30f078c)  
+[[KYUUBI #2604] Hive Backend Engine - Multi tenancy support](https://github.com/apache/incubator-kyuubi/commit/f2b9776e)  
+[[KYUUBI #2576] Bump Hudi 0.11.0](https://github.com/apache/incubator-kyuubi/commit/cb5f49e3)  
+[[KYUUBI #2473][FOLLOWUP] Simplify FilteredShowNamespaceExec](https://github.com/apache/incubator-kyuubi/commit/bea53092)  
+[[KYUUBI #2621] Always use Hadoop shaded client](https://github.com/apache/incubator-kyuubi/commit/981b4161)  
+[[KYUUBI #2615] Add support HIVE_CLI_SERVICE_PROTOCOL_V11](https://github.com/apache/incubator-kyuubi/commit/25471506)  
+[[KYUUBI #2473] [Improvement] Add FilteredShowDatabasesCommand to AuthZ module](https://github.com/apache/incubator-kyuubi/commit/42936aa2)  
+[[KYUUBI #2626] Replace literal by FetchType.LOG](https://github.com/apache/incubator-kyuubi/commit/32b38cc6)  
+[[KYUUBI #2614] Add commons-io to beeline module since jdbc upgraded to 3.1.3](https://github.com/apache/incubator-kyuubi/commit/098ae16c)  
+[[KYUUBI #2539][Subtask] Kyuubi Spark TPC-DS Connector - SupportsReportStatistics](https://github.com/apache/incubator-kyuubi/commit/d3b9c77c)  
+[[KYUUBI #2591] Redact secret information from ProcBuilder log](https://github.com/apache/incubator-kyuubi/commit/1fee068c)  
+[[KYUUBI #2542] [Subtask] Kyuubi Spark TPC-DS Connector - Make useAnsiStringType configurable](https://github.com/apache/incubator-kyuubi/commit/802890a7)  
+[[KYUUBI #2607] Introduce new module and setup testcontainers-based Kudu service for testing](https://github.com/apache/incubator-kyuubi/commit/b85045ad)  
+[[KYUUBI #2333][KYUUBI #2554] Configuring Flink Engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/6b6da1f4)  
+[[KYUUBI #2029] Hive Backend Engine - Operation Logs](https://github.com/apache/incubator-kyuubi/commit/b8fd3785)  
+[[KYUUBI #2609] Set Kyuubi server thrift client socket timeout to inf](https://github.com/apache/incubator-kyuubi/commit/c1df427f)  
+[[KYUUBI #2560] Upgrade kyuubi-hive-jdbc hive version to 3.1.3](https://github.com/apache/incubator-kyuubi/commit/a6e14ac3)  
+[[KYUUBI #2602] Bump testcontainers-scala 0.40.7](https://github.com/apache/incubator-kyuubi/commit/867e0beb)  
+[[KYUUBI #2565] Variable substitution should work in plan only mode](https://github.com/apache/incubator-kyuubi/commit/7f8369bf)  
+[[KYUUBI #2493][FOLLOWUP] Fix the exception that occurred when beeline rendered spark progress](https://github.com/apache/incubator-kyuubi/commit/0f0708b7)  
+[[KYUUBI #2378] Implement BatchesResource GET /batches/${batchId}/log](https://github.com/apache/incubator-kyuubi/commit/8c1fc100)  
+[[KYUUBI #2599] Bump scala-maven-plugin 4.6.1](https://github.com/apache/incubator-kyuubi/commit/ec561bf4)  
+[[KYUUBI #2493] Implement the progress of statement for spark sql engine](https://github.com/apache/incubator-kyuubi/commit/1cb4193d)  
+[[KYUUBI #2375][FOLLOWUP] Implement BatchesResource GET /batches](https://github.com/apache/incubator-kyuubi/commit/bcf30beb)  
+[[KYUUBI #2588] Reformat kyuubi-hive-sql-engine/pom.xml](https://github.com/apache/incubator-kyuubi/commit/5b788a13)  
+[[KYUUBI #2558] fix warn message](https://github.com/apache/incubator-kyuubi/commit/282f105a)  
+[[KYUUBI #2427][FOLLOWUP] Flaky test: deregister when meeting specified exception](https://github.com/apache/incubator-kyuubi/commit/80e068df)  
+[[KYUUBI #2582] Minimize Travis build and test](https://github.com/apache/incubator-kyuubi/commit/a7443285)  
+[[KYUUBI #2500][FOLLOWUP] Resolve flink conf at engine side](https://github.com/apache/incubator-kyuubi/commit/cad0bcd5)  
+[[KYUUBI #2571] Minimize YARN tests overhead](https://github.com/apache/incubator-kyuubi/commit/9d604955)  
+[[KYUUBI #2573] [KPIP-4][SUB-TASK] Add a seekable buffered reader for random access operation log](https://github.com/apache/incubator-kyuubi/commit/965bf218)  
+[[KYUUBI #2375][SUB-TASK][KPIP-4] Implement BatchesResource GET /batches](https://github.com/apache/incubator-kyuubi/commit/c967c74f)  
+[[KYUUBI #2571] Release connection to prevent the engine leak](https://github.com/apache/incubator-kyuubi/commit/270a5726)  
+[[KYUUBI #2522] Even if the process exit code is zero, also check the application state from resource manager](https://github.com/apache/incubator-kyuubi/commit/6e17e794)  
+[[KYUUBI #2569] Change the acquisition method of flinkHome to keep it consistent with other engines](https://github.com/apache/incubator-kyuubi/commit/9d5fba56)  
+[[KYUUBI #2550] Fix swagger does not show the request/response schema issue](https://github.com/apache/incubator-kyuubi/commit/90140cc3)  
+[[KYUUBI #2500] Command OptionParser for launching Flink Backend Engine](https://github.com/apache/incubator-kyuubi/commit/1932ad72)  
+[[KYUUBI #2379][SUB-TASK][KPIP-4] Implement BatchesResource DELETE /batches/${batchId}](https://github.com/apache/incubator-kyuubi/commit/5b3123e4)  
+[[KYUUBI #2513] Support NULL type in trino engine and add QueryTests](https://github.com/apache/incubator-kyuubi/commit/a58e1cf4)  
+[[KYUUBI #2403] [Improvement] move addTimeoutMonitor to AbstractOperation because it was used in multiple engines](https://github.com/apache/incubator-kyuubi/commit/9e263d79)  
+[[KYUUBI #2531] [Subtask] Kyuubi Spark TPC-DS Connector - Initial implementation](https://github.com/apache/incubator-kyuubi/commit/dcc3ccf3)  
+[[KYUUBI #2523] Flaky Test: KyuubiBatchYarnClusterSuite - open batch session](https://github.com/apache/incubator-kyuubi/commit/792c5422)  
+[[KYUUBI #2376][SUB-TASK][KPIP-4] Implement BatchesResource GET /batches/${batchId}](https://github.com/apache/incubator-kyuubi/commit/147c83bf)  
+[[KYUUBI #2547] Support jdbc url prefix jdbc:kyuubi://](https://github.com/apache/incubator-kyuubi/commit/5392591a)  
+[[KYUUBI #2549] Do not auth the request to load OpenApiConf](https://github.com/apache/incubator-kyuubi/commit/841a3635)  
+[[KYUUBI #2548] Prevent dead loop if the batch job submission process is not alive](https://github.com/apache/incubator-kyuubi/commit/672e8e95)  
+[[KYUUBI #2533] Make Utils.parseURL public to remove unnecessary reflection](https://github.com/apache/incubator-kyuubi/commit/3208410d)  
+[[KYUUBI #2524] [DOCS] Update metrics.md](https://github.com/apache/incubator-kyuubi/commit/7612f0af)  
+[[KYUUBI #2532] avoid NPE in KyuubiHiveDriver.acceptsURL](https://github.com/apache/incubator-kyuubi/commit/05161158)  
+[[KYUUBI #2478][FOLLOWUP] Invoke getOpts method instead of Reflection](https://github.com/apache/incubator-kyuubi/commit/cdfae8d8)  
+[[KYUUBI #2490][FOLLOWUP] Fix and move set command test case](https://github.com/apache/incubator-kyuubi/commit/973339db)  
+[[KYUUBI #2517] Rename ZorderSqlAstBuilder to KyuubiSparkSQLAstBuilder](https://github.com/apache/incubator-kyuubi/commit/04a91e10)  
+[[KYUUBI #2025][HIVE] Add a Hive on Yarn doc](https://github.com/apache/incubator-kyuubi/commit/02356a38)  
+[[KYUUBI #2032][Subtask] Hive Backend Engine - new APIs with hive-service-rpc 3.1.2 - SetClientInfo](https://github.com/apache/incubator-kyuubi/commit/3ab2c81d)  
+[[KYUUBI #2490] Fix NPE in getOperationStatus](https://github.com/apache/incubator-kyuubi/commit/96da2544)  
+[[KYUUBI #2516] [DOCS] Add Contributor over time in README.md](https://github.com/apache/incubator-kyuubi/commit/b739d39f)  
+[[KYUUBI #2346] [Improvement] Simplify FlinkProcessBuilder with java executable](https://github.com/apache/incubator-kyuubi/commit/3b04d994)  
+[[KYUUBI #2472] Support FilteredShowTablesCommand for AuthZ module](https://github.com/apache/incubator-kyuubi/commit/c969433f)  
+[[KYUUBI #2309][SUB-TASK][KPIP-4] Implement BatchesResource POST /batches](https://github.com/apache/incubator-kyuubi/commit/5a36db65)  
+[[KYUUBI #2028][FOLLOWUP] add engine stop event and fix the partition of initialized event](https://github.com/apache/incubator-kyuubi/commit/9ac5faaa)  
+[[KYUUBI #2512] Fix broken link of IntelliJ IDEA Setup Guide](https://github.com/apache/incubator-kyuubi/commit/03d4bbe9)  
+[[KYUUBI #2450][FOLLOWUP] Remove opHandle from opHandleSet when exception occurs](https://github.com/apache/incubator-kyuubi/commit/8e4a2954)  
+[[KYUUBI #2510] Fix NPE when invoking YarnApplicationOperation::getApplicationInfoByTag](https://github.com/apache/incubator-kyuubi/commit/c0963e1b)  
+[[KYUUBI #2496] Prevent empty auth user when anonymous is allowed](https://github.com/apache/incubator-kyuubi/commit/beb132f9)  
+[[KYUUBI #2498] Upgrade Delta version to 1.2.1](https://github.com/apache/incubator-kyuubi/commit/fa61da62)  
+[[KYUUBI #2419] Release engine during closing kyuubi server session if share level is connection](https://github.com/apache/incubator-kyuubi/commit/93f13ef6)  
+[[KYUUBI #2487] Fix test command to make it runnable](https://github.com/apache/incubator-kyuubi/commit/af162b1f)  
+[[KYUUBI #2457] Fix flaky test: engine log truncation](https://github.com/apache/incubator-kyuubi/commit/38cf4ccc)  
+[[KYUUBI #2478] Backport HIVE-19018 to Kyuubi Beeline](https://github.com/apache/incubator-kyuubi/commit/268db010)  
+[[KYUUBI #2020] [Subtask] Hive Backend Engine - new APIs with hive-service-rpc 3.1.2 - TGetQueryId](https://github.com/apache/incubator-kyuubi/commit/b41be9eb)  
+[[KYUUBI #2484] Add conf to SessionEvent and display it in EngineSessionPage](https://github.com/apache/incubator-kyuubi/commit/06da8cf8)  
+[[KYUUBI #2433] HiveSQLEngine load required jars from HIVE_HADOOP_CLASSPATH](https://github.com/apache/incubator-kyuubi/commit/679d23f0)  
+[[KYUUBI #2477] Change state early on stopping](https://github.com/apache/incubator-kyuubi/commit/7b70a6a0)  
+[[KYUUBI #2451] Support isWrapperFor and unwrap](https://github.com/apache/incubator-kyuubi/commit/61873214)  
+[[KYUUBI #2453] [Improvement] checkValue of TypedConfigBuilder shall also print the config name](https://github.com/apache/incubator-kyuubi/commit/68ac8a19)  
+[[KYUUBI #2427] Flaky test: deregister when meeting specified exception](https://github.com/apache/incubator-kyuubi/commit/40739a9f)  
+[[KYUUBI #2456] Supports managing engines of different share level in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/c5210547)  
+[[KYUUBI #1987] Support preserve user context in group/server share level](https://github.com/apache/incubator-kyuubi/commit/7cede6fd)  
+[[KYUUBI #2467] Remove close launchEngineOp](https://github.com/apache/incubator-kyuubi/commit/9fd62a21)  
+[[KYUUBI #2440] [Improvement] spark engine event add endTime when it is stopped](https://github.com/apache/incubator-kyuubi/commit/8a44b6bf)  
+[[KYUUBI #2461] Use the original host argument](https://github.com/apache/incubator-kyuubi/commit/dcea90b0)  
+[[KYUUBI #2463] Redact `kyuubi.ha.zookeeper.auth.digest` in Spark engine](https://github.com/apache/incubator-kyuubi/commit/f13856aa)  
+[[KYUUBI #2445] Implement ApplicationManager and Yarn/ JPS-local Application Operation](https://github.com/apache/incubator-kyuubi/commit/5e6d645e)  
+[[KYUUBI #1936][FOLLOWUP] Stop updating credentials when credentials are expired](https://github.com/apache/incubator-kyuubi/commit/5ae7c9c0)  
+[[KYUUBI #2450] Update lastAccessTime in getStatus and add opHandle to opHandleSet before run](https://github.com/apache/incubator-kyuubi/commit/86f016d9)  
+[[KYUUBI #2439] Using Pure Java TPC-DS generator](https://github.com/apache/incubator-kyuubi/commit/3ecdd422)  
+[[KYUUBI #2448] Log the engine id when opening Kyuubi connection](https://github.com/apache/incubator-kyuubi/commit/eeb8a94f)  
+[[KYUUBI #2436] Add AlterTableRecoverPartitionsCommand for Spark Sql Authz PrivilegesBuilder](https://github.com/apache/incubator-kyuubi/commit/71bc0dc1)  
+[[KYUUBI #2432][DOCS] button "Download" is invalid](https://github.com/apache/incubator-kyuubi/commit/a3e8f7ac)  
+[[KYUUBI #2429][KYUUBI #2416] Increase Test Coverage For Privileges Builder](https://github.com/apache/incubator-kyuubi/commit/14f675d2)  
+[[KYUUBI #2344] [Improvement] Add Kyuubi Server on Kubernetes with Spark Cluster mode integration test](https://github.com/apache/incubator-kyuubi/commit/7fa04947)  
+[[KYUUBI #2201] Show ExecutionId when running status on query engine page](https://github.com/apache/incubator-kyuubi/commit/4fb93275)  
+[[KYUUBI #2424] [Improvement] add Flink compile version and Trino client compile version to KyuubiServer Log](https://github.com/apache/incubator-kyuubi/commit/a09ad0b6)  
+[[KYUUBI #2253] [Improvement] Trino Engine - Events support](https://github.com/apache/incubator-kyuubi/commit/4ec707b7)  
+[[KYUUBI #2426] Return complete error stack trace information](https://github.com/apache/incubator-kyuubi/commit/deb0e620)  
+[[KYUUBI #2410] [Improvement] Fix docker-image-tool.sh example version to 1.4.0](https://github.com/apache/incubator-kyuubi/commit/5f04aa67)  
+[[KYUUBI #2351] Fix Hive Engine terminating blocked by non-daemon threads](https://github.com/apache/incubator-kyuubi/commit/45e5eda0)  
+[[KYUUBI #2422] Wrap close session with try-finally](https://github.com/apache/incubator-kyuubi/commit/70a3005e)  
+[[KYUUBI #2420] Fix outdated .gitignore for dependency-reduced-pom.xml](https://github.com/apache/incubator-kyuubi/commit/60179171)  
+[[KYUUBI #2368] [Improvement] Command OptionParser for launching Trino Backend Engine](https://github.com/apache/incubator-kyuubi/commit/4f4960cf)  
+[[KYUUBI #2021][FOLLOWUP] Move derby workaround to test code](https://github.com/apache/incubator-kyuubi/commit/bafc2f84)  
+[[KYUUBI #2301] Limit the maximum number of concurrent connections per user and ipaddress](https://github.com/apache/incubator-kyuubi/commit/dba9e223)  
+[[KYUUBI #2323] Separate events to a submodule - kyuubi-event](https://github.com/apache/incubator-kyuubi/commit/d851b23a)  
+[[KYUUBI #2289] Use unique tag to kill applications](https://github.com/apache/incubator-kyuubi/commit/c9ea7fac)  
+[[KYUUBI #2021] Command OptionParser for launching Hive Backend Engine](https://github.com/apache/incubator-kyuubi/commit/20af38ee)  
+[[KYUUBI #2414][KYUUBI #2413] Fix InsertIntoHiveTableCommand case in PrivilegesBuilder#buildCommand()](https://github.com/apache/incubator-kyuubi/commit/b8877323)  
+[[KYUUBI #2406] Add Flink environments to template](https://github.com/apache/incubator-kyuubi/commit/cbde503f)  
+[[KYUUBI #2355] Bump Delta Lake 1.2.0](https://github.com/apache/incubator-kyuubi/commit/5bf4184c)  
+[[KYUUBI #2349][DOCS] Usage docs for kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a59188ff)  
+[[KYUUBI #2381] [Test] Add Kyuubi on k8s With Spark on k8s client deploy-mode unit test](https://github.com/apache/incubator-kyuubi/commit/62db92f7)  
+[[KYUUBI #2390] RuleEliminateMarker stays in analyze phase for data masking](https://github.com/apache/incubator-kyuubi/commit/eb7ad512)  
+[[KYUUBI #2395] [DOC] Add Documentation for Spark AuthZ Extension](https://github.com/apache/incubator-kyuubi/commit/8f29b4fd)  
+[[KYUUBI #2397] Supports managing engines of different versions in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/a7674d29)  
+[[KYUUBI #2402] [Improvement] addTimeoutMonitor for trino engine when it run query async](https://github.com/apache/incubator-kyuubi/commit/9fb62772)  
+[[KYUUBI #2360] [Subtask] Configuring Hive engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/26d52faa)  
+[[KYUUBI #2399] Fix PrivilegesBuilder Build Wrong PrivilegeObjets When Query Without Project But With OrderBy/PartitionBy](https://github.com/apache/incubator-kyuubi/commit/91adc3fd)  
+[[KYUUBI #2308][SUB-TASK][KPIP-4] Batch job configuration ignore list and pre-defined configuration in server-side](https://github.com/apache/incubator-kyuubi/commit/4b42c735)  
+[[KYUUBI #2369] [Improvement] Update developer.md to describe how to append descriptions of new configurations to settings.md](https://github.com/apache/incubator-kyuubi/commit/1a58aaf7)  
+[[KYUUBI #2391] Fix privileges builder return wrong result when there is no project but has filter/join](https://github.com/apache/incubator-kyuubi/commit/88f168c6)  
+[[KYUUBI #2361] [Improvement] Configuring Trino Engine heap memory and java opts](https://github.com/apache/incubator-kyuubi/commit/1a68b866)  
+[[KYUUBI #2338] [DOCS] Upgrade sphinx dependencies for documentation build](https://github.com/apache/incubator-kyuubi/commit/7c6789a6)  
+[[KYUUBI #2385] Export JAVA_HOME. It seems TravisCI stopped doing it recently](https://github.com/apache/incubator-kyuubi/commit/b8d04af4)  
+[[KYUUBI #2353] [SUB-TASK][KPIP-4] Implement BatchJobSubmission operation and basic KyuubiBatchSessionImpl](https://github.com/apache/incubator-kyuubi/commit/b6d5c64c)  
+[[KYUUBI #2359] [Test] Build WithKyuubiServerOnKubernetes](https://github.com/apache/incubator-kyuubi/commit/65a272f6)  
+[[KYUUBI #2330] [Subtask] Hive Backend Engine - GetTypeInfo Operation](https://github.com/apache/incubator-kyuubi/commit/6d147894)  
+[[KYUUBI #2257] Replace conf vars with strings in HiveSQLEngine](https://github.com/apache/incubator-kyuubi/commit/86c6b1f8)  
+[[KYUUBI #2248][DOCS] Add a flink on yarn kerberos doc](https://github.com/apache/incubator-kyuubi/commit/d0a4697f)  
+[[KYUUBI #1451] Support Data Column Masking](https://github.com/apache/incubator-kyuubi/commit/fa95281b)  
+[[KYUUBI #2357] [Improvement] Add warn log and check in class of HiveProcessBuilder](https://github.com/apache/incubator-kyuubi/commit/d5acb78e)  
+[[KYUUBI #2328] Support getting mainResource in the module target directory of KYUUBI_HOME](https://github.com/apache/incubator-kyuubi/commit/8ef211a1)  
+[[KYUUBI #2354] Fix NPE in process builder log capture thread](https://github.com/apache/incubator-kyuubi/commit/d11866df)  
+[[KYUUBI #2337] [DOCS] Access Kyuubi with Kyuubi JDBC Driver](https://github.com/apache/incubator-kyuubi/commit/659d981c)  
+[[KYUUBI #2331] Add createSession method to further abstract openSession](https://github.com/apache/incubator-kyuubi/commit/8a525e52)  
+[[KYUUBI #2347] Output trino query id within query execute](https://github.com/apache/incubator-kyuubi/commit/55c4cae1)  
+[[KYUUBI #2345] [DOC] Hot Upgrade Kyuubi Server](https://github.com/apache/incubator-kyuubi/commit/12416a78)  
+[[KYUUBI #2336] Simplify TrinoProcessBuilder with java executable](https://github.com/apache/incubator-kyuubi/commit/9e486080)  
+[[KYUUBI #2343] [KYUUBI #2297] HiveEngineEvent toString is not pretty](https://github.com/apache/incubator-kyuubi/commit/0888abd9)  
+[[KYUUBI #1989] Decouple curator from other modules](https://github.com/apache/incubator-kyuubi/commit/91010689)  
+[[KYUUBI #2324] [SUB-TASK][KPIP-4] Implement SparkBatchProcessBuilder to submit spark batch job](https://github.com/apache/incubator-kyuubi/commit/a63e811e)  
+[[KYUUBI #2329] [KYUUBI #2214][FOLLOWUP] Cleanup kubernetes-deployment-it](https://github.com/apache/incubator-kyuubi/commit/ff19ae73)  
+[[KYUUBI #2216] [Test] [K8s] Add Spark Cluster mode on Kubernetes integration test](https://github.com/apache/incubator-kyuubi/commit/a4c521c6)  
+[[KYUUBI #2024][FOLLOWUP] Hive Backend Engine - ProcBuilder for HiveEngine](https://github.com/apache/incubator-kyuubi/commit/a56e4b4f)  
+[[KYUUBI #2281] The RenewDelegationToken method of TFrontendService should return SUCCESS_STATUS by default](https://github.com/apache/incubator-kyuubi/commit/04c536b9)  
+[[KYUUBI #2300] Add http UGIAssuming handler wrapper for kerberos enabled restful frontend service](https://github.com/apache/incubator-kyuubi/commit/9bd91054)  
+[[KYUUBI #2292][FOLLOWUP] Unify kyuubi server plugin location](https://github.com/apache/incubator-kyuubi/commit/93627611)  
+[[KYUUBI #2320] Make the CodeSource location correctly obtained on Windows](https://github.com/apache/incubator-kyuubi/commit/c29d2d85)  
+[[KYUUBI #2316] [DOCS] Fix typo in GitHub issue template](https://github.com/apache/incubator-kyuubi/commit/d3d73c6b)  
+[[KYUUBI #2317][BUILD] Bump hive-service-rpc 3.1.3 version](https://github.com/apache/incubator-kyuubi/commit/ff1bb555)  
+[[KYUUBI #2311] Refine github label](https://github.com/apache/incubator-kyuubi/commit/fbb2d974)  
+[[KYUUBI #2310] Make ranger extension work with mac m1](https://github.com/apache/incubator-kyuubi/commit/ed827708)  
+[[KYUUBI #2312] Spark data type TimestampNTZ supported version changes as 3.4.0](https://github.com/apache/incubator-kyuubi/commit/e1e0b358)  
+[[KYUUBI #2296] Fix operation log file handler leak](https://github.com/apache/incubator-kyuubi/commit/e5834ae7)  
+[[KYUUBI #2250] Support to limit the spark engine max running time](https://github.com/apache/incubator-kyuubi/commit/4bc14657)  
+[[KYUUBI #2299] Fix flaky test: support engine alive probe to fast fail on engine broken](https://github.com/apache/incubator-kyuubi/commit/9c4af55f)  
+[[KYUUBI #2292] Unify kyuubi server plugin location](https://github.com/apache/incubator-kyuubi/commit/b41ec939)  
+[[KYUUBI #2292] Unify spark extension location](https://github.com/apache/incubator-kyuubi/commit/82a024a9)  
+[[KYUUBI #2084][FOLLOWUP] Support arbitrary parameters for KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/0ed865f7)  
+[[KYUUBI #2287] Revamp Flink IT by random port and merge tests](https://github.com/apache/incubator-kyuubi/commit/19f1d411)  
+[[KYUUBI #1451] Add Row-level filtering support](https://github.com/apache/incubator-kyuubi/commit/c4a608f9)  
+[[KYUUBI #2280] [INFRA] Replace BSD 3-clause with ASF License v2 for scala binaries](https://github.com/apache/incubator-kyuubi/commit/03b72268)  
+[[KYUUBI #2277] Inline kyuubi prefix in KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/6a231519)  
+[[KYUUBI #2266] The default value of frontend.connection.url.use.hostname should be set to true to be consistent with previous versions](https://github.com/apache/incubator-kyuubi/commit/c1a68a7c)  
+[[KYUUBI #2260] The running query will not update the duration of the page](https://github.com/apache/incubator-kyuubi/commit/a6ba6420)  
+[[KYUUBI #2272] Fix incorrect doc link](https://github.com/apache/incubator-kyuubi/commit/f0d6ca0f)  
+[[KYUUBI #2268] Flaky test: submit spark app timeout with last log output](https://github.com/apache/incubator-kyuubi/commit/2af5067a)  
+[[KYUUBI #2275][DOCS] Fix missing prefix in trino engine quick start](https://github.com/apache/incubator-kyuubi/commit/a18a3a95)  
+[[KYUUBI #2243][DOCS] Add quick start for trino engine](https://github.com/apache/incubator-kyuubi/commit/0ee53710)  
+[[KYUUBI #2255] The engine state of Spark's EngineEvent is hardcoded with 0](https://github.com/apache/incubator-kyuubi/commit/6bd9edf2)  
+[[KYUUBI #2263] [KYUUBI #2262] Kyuubi Spark Nightly failed - select timestamp_ntz *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/89edeaee)  
+[[KYUUBI #2209] Add detail usage documents of Flink engine](https://github.com/apache/incubator-kyuubi/commit/5733452b)  
+[[KYUUBI #2246] [BUILD] Pick Commons-Logging dependence out of Hudi-Common](https://github.com/apache/incubator-kyuubi/commit/0fd47381)  
+[[KYUUBI #2028] Hive Backend Engine - Events support](https://github.com/apache/incubator-kyuubi/commit/9da22e66)  
+[[KYUUBI #2207] Fix DayTimeIntervalType/YearMonthIntervalType Column Size](https://github.com/apache/incubator-kyuubi/commit/d77882a8)  
+[[KYUUBI #1451] Introduce Kyuubi Spark AuthZ Module with column-level fine-grained authorization](https://github.com/apache/incubator-kyuubi/commit/513ea834)  
+[[KYUUBI #1798] Add EventBus module to unify the distribution and subscription of Kyuubi's events](https://github.com/apache/incubator-kyuubi/commit/6fe69753)  
+[[KYUUBI #2207] Support newly added spark data types: TimestampNTZType](https://github.com/apache/incubator-kyuubi/commit/dcc71b30)  
+[[KYUUBI #1021] Expire CredentialsRef in a proper time to reduce memor…](https://github.com/apache/incubator-kyuubi/commit/e16c728d)  
+[[KYUUBI #2241] Remove unused deployment documents of Spark](https://github.com/apache/incubator-kyuubi/commit/f80d1a8b)  
+[[KYUUBI #2244] load-kyuubi-env.sh should print SPARK_ENGINE_HOME for consistency](https://github.com/apache/incubator-kyuubi/commit/cf490a1b)  
+[[KYUUBI #2023] Hive Backend Engine - Shade HiveSQLEngine runtime](https://github.com/apache/incubator-kyuubi/commit/a7708076)  
+[[KYUUBI #1498] Support operation log for ExecuteScala](https://github.com/apache/incubator-kyuubi/commit/ea0121bc)  
+[[KYUUBI #2087] Add issue template for documentation improvement](https://github.com/apache/incubator-kyuubi/commit/fe3ece1d)  
+[[KYUUBI #1962] Add timeout check for createSpark](https://github.com/apache/incubator-kyuubi/commit/8b06f135)  
+[[KYUUBI #2218] Fix maven options about hive-provided](https://github.com/apache/incubator-kyuubi/commit/98887e75)  
+[[KYUUBI #2225] Support to set result max rows for spark engine](https://github.com/apache/incubator-kyuubi/commit/9797ff0d)  
+[[KYUUBI #2008][FOLLOWUP] Support engine type and subdomain in kyuubi-ctl](https://github.com/apache/incubator-kyuubi/commit/015cbe5d)  
+[[KYUUBI #2231] Close action and default SparkSession before `createSpark`](https://github.com/apache/incubator-kyuubi/commit/b18be382)  
+[[KYUUBI #2222] Refactor the log when failing to get hadoop fs delegation token](https://github.com/apache/incubator-kyuubi/commit/a5b4c1b9)  
+[[KYUUBI #2227] Fix operation log dir not deleted issue](https://github.com/apache/incubator-kyuubi/commit/55052f96)  
+[[KYUUBI #2223] Return the last rows of log for prompts even exception detected](https://github.com/apache/incubator-kyuubi/commit/c1ffc3ca)  
+[[KYUUBI #2221] Shade hive-service-rpc and thrift in Spark engine](https://github.com/apache/incubator-kyuubi/commit/86cd685d)  
+[[KYUUBI #2102] Support to retry the internal thrift request call and add engine liveness probe to enable fast fail before retry](https://github.com/apache/incubator-kyuubi/commit/e8445b7f)  
+[[KYUUBI #2207] Support newly added spark data types: DayTimeIntervalType/YearMonthIntervalType](https://github.com/apache/incubator-kyuubi/commit/d0c92caa)  
+[[KYUUBI #2085][FOLLOWUP] Fix kyuubi-common wrong label](https://github.com/apache/incubator-kyuubi/commit/fb3e1235)  
+[[KYUUBI #2208] Fixed session close operator log session dir not deleted](https://github.com/apache/incubator-kyuubi/commit/a13a89e8)  
+[[KYUUBI #2214] Add Spark Engine on Kubernetes integration test](https://github.com/apache/incubator-kyuubi/commit/4f0323d5)  
+[[KYUUBI #2203][FLINK] Support flink conf set by kyuubi conf file](https://github.com/apache/incubator-kyuubi/commit/c09cd654)  
+[[KYUUBI #2197][BUILD] Bump jersey 2.35 version](https://github.com/apache/incubator-kyuubi/commit/88e0bcd2)  
+[[KYUUBI #2204] Make comments consistent with code in EngineRef](https://github.com/apache/incubator-kyuubi/commit/89497c41)  
+[[KYUUBI #2186] Manage test failures with kyuubi spark nightly build - execute statement - select interval](https://github.com/apache/incubator-kyuubi/commit/8f2b7358)  
+[[KYUUBI #2195] Using while-loop or for-loop instead of map/range to improve performance in RowSet](https://github.com/apache/incubator-kyuubi/commit/5ed8f148)  
+[[KYUUBI #2033] Hive Backend Engine - GetCrossReference](https://github.com/apache/incubator-kyuubi/commit/4e01f9b9)  
+[[KYUUBI #2135] Build and test all modules on Linux ARM64](https://github.com/apache/incubator-kyuubi/commit/e390f34c)  
+[[KYUUBI #2035] Hive Backend Engine - `build/dist` support](https://github.com/apache/incubator-kyuubi/commit/04896233)  
+[[KYUUBI #2189] Manage test failures with kyuubi spark nightly build - deregister when meeting specified exception *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/0025505a)  
+[[KYUUBI #2119][FOLLOWUP] Support output progress bar in Spark engine](https://github.com/apache/incubator-kyuubi/commit/e7c42012)  
+[[KYUUBI #2187] Manage test failures with kyuubi spark nightly build - execute simple scala code *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/b02fb584)  
+[[KYUUBI #2184] Manage test failures with kyuubi spark nightly build - operation listener *** FAILED ***](https://github.com/apache/incubator-kyuubi/commit/55b14056)  
+[[KYUUBI #2034] Hive Backend Engine - GetPrimaryKeys](https://github.com/apache/incubator-kyuubi/commit/f333e6ba)  
+[[KYUUBI #2086] Add issue template for subtasks - remove duplicate labels](https://github.com/apache/incubator-kyuubi/commit/ea6e02b5)  
+[[KYUUBI #2086] Add issue template for subtasks](https://github.com/apache/incubator-kyuubi/commit/0b721f4f)  
+[[KYUUBI #2163][K8s] copy beeline-jars into docker image](https://github.com/apache/incubator-kyuubi/commit/f29adb2f)  
+[[KYUUBI #2175] Improve CI with cancel & concurrency & paths filter](https://github.com/apache/incubator-kyuubi/commit/3eb1fa9e)  
+[[KYUUBI #2172][BUILD] Bump Flink 1.14.4 version](https://github.com/apache/incubator-kyuubi/commit/3c2463b1)  
+[[KYUUBI #2159][FLINK] Prevent loss of exception message line separator](https://github.com/apache/incubator-kyuubi/commit/4eb5df34)  
+[[KYUUBI #2024] Hive Backend Engine - ProcBuilder for HiveEngine](https://github.com/apache/incubator-kyuubi/commit/2911ac26)  
+[[KYUUBI #2017] Hive Backend Engine - GetColumns Operation](https://github.com/apache/incubator-kyuubi/commit/f22a14f1)  
+[[KYUUBI #2019] Hive Backend Engine - GetTableTypes Operation](https://github.com/apache/incubator-kyuubi/commit/db987998)  
+[[KYUUBI #2018] Hive Backend Engine - GetFunctions Operation](https://github.com/apache/incubator-kyuubi/commit/57edfcd0)  
+[[KYUUBI #2156][FOLLOWUP] Fix configuration format in document](https://github.com/apache/incubator-kyuubi/commit/62f685ff)  
+[[KYUUBI #2119] Support output progress bar in Spark engine](https://github.com/apache/incubator-kyuubi/commit/f65db034)  
+[[KYUUBI #2016] Hive Backend Engine - GetTables Operation](https://github.com/apache/incubator-kyuubi/commit/449c4260)  
+[[KYUUBI #2156] Change log to reflect exactly why getting token failed](https://github.com/apache/incubator-kyuubi/commit/31be7a30)  
+[[KYUUBI #2148][DOCS] Add dev/reformat usage](https://github.com/apache/incubator-kyuubi/commit/36507f8e)  
+[[KYUUBI #2150] [DOCS] Fix Getting Started With Kyuubi on Kubernetes](https://github.com/apache/incubator-kyuubi/commit/95fc6da9)  
+[[KYUUBI #2143] [KYUUBI #2142][DOCS] Add IDEA setup guide](https://github.com/apache/incubator-kyuubi/commit/2225b9a1)  
+[[KYUUBI #2015] Hive Backend Engine - GetSchemas Operation](https://github.com/apache/incubator-kyuubi/commit/07d36320)  
+[[KYUUBI #1936][FOLLOWUP] Send credentials when opening session and wait for completion](https://github.com/apache/incubator-kyuubi/commit/eb4d2890)  
+[[KYUUBI #2085][FOLLOWUP] Fix the wrong path of `module:hive`](https://github.com/apache/incubator-kyuubi/commit/ffb7a6f4)  
+[[KYUUBI #2134] Respect Spark bundled log4j in extension modules](https://github.com/apache/incubator-kyuubi/commit/d7d8b05d)  
+[[KYUUBI #2014] Hive Backend Engine - GetCatalog Operation](https://github.com/apache/incubator-kyuubi/commit/54c0035b)  
+[[KYUUBI #2129] FlinkEngine throws UnsupportedOperationException in GetColumns](https://github.com/apache/incubator-kyuubi/commit/4e2fdd52)  
+[[KYUUBI #2022] Hive Backend Engine - maven-google-downloader plugin support for hive distribution](https://github.com/apache/incubator-kyuubi/commit/0cfbcd2e)  
+[[KYUUBI #2084] Support arbitrary parameters for KyuubiConf](https://github.com/apache/incubator-kyuubi/commit/8f15622d)  
+[[KYUUBI #2112] Improve the compatibility of queryTimeout in more version clients](https://github.com/apache/incubator-kyuubi/commit/55b422bf)  
+[[KYUUBI #2115] Update license and enhance collect_licenses script](https://github.com/apache/incubator-kyuubi/commit/33424a1b)  
+[[KYUUBI #2085] Add a labeler github action to triage PRs](https://github.com/apache/incubator-kyuubi/commit/da22498a)  
+[[KYUUBI #1866][FOLLOWUP] Add Deploy Kyuubi Flink engine on Yarn](https://github.com/apache/incubator-kyuubi/commit/a83cd49e)  
+[[KYUUBI #2120] Optimize RenewDelegationToken logs in Spark engine](https://github.com/apache/incubator-kyuubi/commit/450cf691)  
+[[KYUUBI #2104] Kill yarn job using yarn client API when kyuubi engine …](https://github.com/apache/incubator-kyuubi/commit/ffdd665f)  
+[[KYUUBI #2118] [SUB-TASK][KPIP-2] Support session jars management](https://github.com/apache/incubator-kyuubi/commit/537a0f82)  
+[[KYUUBI #2127] avoid to set HA_ZK_NAMESPACE and HA_ZK_ENGINE_REF_ID repetitively when create flink sql engine](https://github.com/apache/incubator-kyuubi/commit/48c0059d)  
+[[KYUUBI #2123] Output engine information after openEngineSession call fails](https://github.com/apache/incubator-kyuubi/commit/f97350de)  
+[[KYUUBI #2125] closeSession should avoid sending RPC after openSession fails](https://github.com/apache/incubator-kyuubi/commit/8c85480b)  
+[[KYUUBI #2116] move toString() to ProcBuilder trait from its implements](https://github.com/apache/incubator-kyuubi/commit/e4af5513)  
+[[KYUUBI #2103] Revert "[KYUUBI #1948] Upgrade thrift version to 0.16.0"](https://github.com/apache/incubator-kyuubi/commit/f8efcb71)  
+[[KYUUBI #1866][FOLLOWUP] Add logging of Flink SQL Engine](https://github.com/apache/incubator-kyuubi/commit/b8389dae)  
+[[KYUUBI #1866][DOCS] Add flink sql engine quick start](https://github.com/apache/incubator-kyuubi/commit/8f7b2c66)  
+[[KYUUBI #2108] Add description about trino in the config of engine.type](https://github.com/apache/incubator-kyuubi/commit/b7a5cfcf)  
+[[KYUUBI #2071] Using while-loop instead of map/range to improve performance in RowSet](https://github.com/apache/incubator-kyuubi/commit/e942df40)  
+[[KYUUBI #2070][FLINK] Support Flink job submission on yarn-session mode](https://github.com/apache/incubator-kyuubi/commit/7d66e9aa)  
+[[KYUUBI #2089] Add debugging instructions for Flink engine](https://github.com/apache/incubator-kyuubi/commit/2486c5df)  
+[[KYUUBI #2097] [CI] Upload Test Log for CI failure shall contain trino engine log #2094](https://github.com/apache/incubator-kyuubi/commit/c5a7c669)  
+[[KYUUBI #2095] Remove useless logic about add conf when create a new engine](https://github.com/apache/incubator-kyuubi/commit/46234594)  
+[[KYUUBI #1948][FOLLOWUP] Relocate fb303 classes](https://github.com/apache/incubator-kyuubi/commit/12d56422)  
+[[KYUUBI #1978] Support NEGOTIATE/BASIC authorization for restful frontend service](https://github.com/apache/incubator-kyuubi/commit/1e23e7a9)  
+[[KYUUBI #2081] YARN_CONF_DIR shall be added to kyuubi server classpath as HADOOP_CONF_DIR](https://github.com/apache/incubator-kyuubi/commit/a71c511b)  
+[[KYUUBI #2078] `logCaptureThread` does not catch sparksubmit exception](https://github.com/apache/incubator-kyuubi/commit/cf014ee2)  
+[[KYUUBI #1936] Send credentials when opening session and wait for completion](https://github.com/apache/incubator-kyuubi/commit/8e983a19)  
+[[KYUUBI #2079] Update kyuubi layer source file to add flink and trino…](https://github.com/apache/incubator-kyuubi/commit/a882c4bf)  
+[[KYUUBI #1563] Fix broken link and add new link in `CONTRIBUTION.md`](https://github.com/apache/incubator-kyuubi/commit/95a4974e)  
+[[KYUUBI #2072] Improve rest server behavior](https://github.com/apache/incubator-kyuubi/commit/746a94a7)  
+[[KYUUBI #2075] Using thread-safe FastDateFormat instead of SimpleDateFormat](https://github.com/apache/incubator-kyuubi/commit/5a64e124)  
+[[KYUUBI #2066] fix spelling mistake and appropriate naming](https://github.com/apache/incubator-kyuubi/commit/caeb6a43)  
+[[KYUUBI #2063] Fix engine idle timeout lose efficacy for Flink Engine](https://github.com/apache/incubator-kyuubi/commit/dde83819)  
+[[KYUUBI #2061] Implementation of the very basic UI on current Jetty server](https://github.com/apache/incubator-kyuubi/commit/9bf8ff83)  
+[[KYUUBI #2011] Introduce to very basic hive engine](https://github.com/apache/incubator-kyuubi/commit/2b50aaa4)  
+[[KYUUBI #1215][DOC] Document incremental collection](https://github.com/apache/incubator-kyuubi/commit/54dfb4bb)  
+[[KYUUBI #2043] Upgrade log4j/2.x/ to 2.17.2](https://github.com/apache/incubator-kyuubi/commit/dc6085e0)  
+[[KYUUBI #2060] Clear job group for init SQL](https://github.com/apache/incubator-kyuubi/commit/f8d9010b)  
+[[KYUUBI #2044] Remove authentication thread local objects to prevent memory leak](https://github.com/apache/incubator-kyuubi/commit/35a6b9b3)  
+[[KYUUBI #1955] Add CI for branch-1.5 & 1.4 SNAPSHOTS](https://github.com/apache/incubator-kyuubi/commit/df2ff6e8)  
+[[KYUUBI #2054] [KYUUBI-1819] Support closing Flink SQL engine process](https://github.com/apache/incubator-kyuubi/commit/9518724c)  
+[[KYUUBI #2055] correct the log service name](https://github.com/apache/incubator-kyuubi/commit/b1e949d4)  
+[[KYUUBI #2047] Support more MySQL JDBC driver versions](https://github.com/apache/incubator-kyuubi/commit/109569bc)   
diff --git a/content/docs/latest/_sources/client/advanced/configurations.rst.txt b/content/docs/latest/_sources/client/advanced/configurations.rst.txt
new file mode 100644
index 0000000..fd9f20a
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/configurations.rst.txt
@@ -0,0 +1,17 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Client Configuration Guide
+==========================
diff --git a/content/docs/latest/_sources/client/advanced/features/engine_pool.rst.txt b/content/docs/latest/_sources/client/advanced/features/engine_pool.rst.txt
new file mode 100644
index 0000000..ed97f3c
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/engine_pool.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Enabling Kyuubi Engine Pool
+===========================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/engine_resouces.rst.txt b/content/docs/latest/_sources/client/advanced/features/engine_resouces.rst.txt
new file mode 100644
index 0000000..538c9ec
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/engine_resouces.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Configuring Resources for Kyuubi Engines
+========================================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/engine_share_level.rst.txt b/content/docs/latest/_sources/client/advanced/features/engine_share_level.rst.txt
new file mode 100644
index 0000000..9dd484b
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/engine_share_level.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Sharing and Isolation for Kyuubi Engines
+========================================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/engine_ttl.rst.txt b/content/docs/latest/_sources/client/advanced/features/engine_ttl.rst.txt
new file mode 100644
index 0000000..0ba7751
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/engine_ttl.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Setting Time to Live for Kyuubi Engines
+=======================================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/engine_type.rst.txt b/content/docs/latest/_sources/client/advanced/features/engine_type.rst.txt
new file mode 100644
index 0000000..1687678
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/engine_type.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Using Different Kyuubi Engines
+==============================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/index.rst.txt b/content/docs/latest/_sources/client/advanced/features/index.rst.txt
new file mode 100644
index 0000000..0491af2
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/index.rst.txt
@@ -0,0 +1,30 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Advanced Features
+=================
+
+.. toctree::
+    :maxdepth: 2
+
+    engine_type
+    engine_share_level
+    engine_ttl
+    engine_pool
+    engine_resources
+    scala
+    plan_only
+
+
diff --git a/content/docs/latest/_sources/client/advanced/features/plan_only.rst.txt b/content/docs/latest/_sources/client/advanced/features/plan_only.rst.txt
new file mode 100644
index 0000000..2f2c7f1
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/plan_only.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Plan Only Execution Mode
+========================
+
diff --git a/content/docs/latest/_sources/client/advanced/features/scala.rst.txt b/content/docs/latest/_sources/client/advanced/features/scala.rst.txt
new file mode 100644
index 0000000..64877c7
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/features/scala.rst.txt
@@ -0,0 +1,18 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Running Scala Snippets
+======================
+
diff --git a/content/docs/latest/_sources/client/advanced/index.rst.txt b/content/docs/latest/_sources/client/advanced/index.rst.txt
new file mode 100644
index 0000000..1ff4e59
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/index.rst.txt
@@ -0,0 +1,25 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Client Commons
+==============
+
+.. toctree::
+    :maxdepth: 2
+
+    configurations
+    logging
+    kerberized_kyuubi
+    features/index
diff --git a/content/docs/latest/_sources/client/advanced/kerberos.md.txt b/content/docs/latest/_sources/client/advanced/kerberos.md.txt
new file mode 100644
index 0000000..f6d71e8
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/kerberos.md.txt
@@ -0,0 +1,224 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Configure Kerberos for clients to Access Kerberized Kyuubi
+
+## Instructions
+When Kyuubi is secured by Kerberos, the authentication procedure becomes a little complicated.
+
+![](../../imgs/kyuubi_kerberos_authentication.png)
+
+The graph above shows a simplified Kerberos authentication procedure:
+1. Kerberos client sends user principal and secret key to KDC. Secret key can be a password or a keytab file.   
+2. KDC returns a `ticket-granting ticket`(TGT).
+3. Kerberos client stores TGT into a ticket cache.
+4. JDBC client, such as beeline and BI tools, reads TGT from the ticket cache.
+5. JDBC client sends TGT and server principal to KDC.
+6. KDC returns a `client-to-server ticket`.
+7. JDBC client sends `client-to-server ticket` to Kyuubi server to prove its identity.
+
+In the rest of this page, we describe the steps needed to pass this authentication.
+
+## Install Kerberos Client
+Usually, the Kerberos client is installed by default. You can validate it using the `klist` tool.
+
+Linux command and output:
+```bash
+$ klist -V
+Kerberos 5 version 1.15.1
+```
+
+MacOS command and output:
+```bash
+$ klist --version
+klist (Heimdal 1.5.1apple1)
+Copyright 1995-2011 Kungliga Tekniska Högskolan
+Send bug-reports to heimdal-bugs@h5l.org
+```
+
+Windows command and output:
+```cmd
+> klist -V
+Kerberos for Windows
+```
+
+If the client is not installed, you should install it first, based on your OS platform.  
+We recommend installing the MIT Kerberos distribution, as all commands in this guide are based on it.  
+
+## Configure Kerberos Client
+The Kerberos client needs a configuration file that governs how the Kerberos ticket cache is created.
+The following are the configuration file's default locations on different OSes:
+
+OS | Path
+---| ---
+Linux | /etc/krb5.conf
+MacOS | /etc/krb5.conf
+Windows | %ProgramData%\MIT\Kerberos5\krb5.ini
+
+You can use the `KRB5_CONFIG` environment variable to override the default location.
+
+The configuration file should point to the same KDC as the Kyuubi server does.
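+
+For reference, a minimal `krb5.conf` could look like the sketch below. The realm and the KDC host here are placeholder values; they must match your own Kerberos deployment, i.e. the same KDC that the Kyuubi server uses.
+
+```ini
+[libdefaults]
+    default_realm = KYUUBI.APACHE.ORG
+
+[realms]
+    KYUUBI.APACHE.ORG = {
+        kdc = kdc.kyuubi.apache.org
+        admin_server = kdc.kyuubi.apache.org
+    }
+```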
+
+## Get Kerberos TGT
+Execute `kinit` command to get TGT from KDC.
+
+Suppose user principal is `kyuubi_user@KYUUBI.APACHE.ORG` and user keytab file name is `kyuubi_user.keytab`, 
+the command should be:
+
+```
+$ kinit -kt kyuubi_user.keytab kyuubi_user@KYUUBI.APACHE.ORG
+
+(The command is identical on different OS platforms)
+```
+
+You may also execute `kinit` command with principal and password to get TGT:
+
+```
+$ kinit kyuubi_user@KYUUBI.APACHE.ORG
+Password for kyuubi_user@KYUUBI.APACHE.ORG: password 
+
+(The command is identical on different OS platforms)
+```
+
+If the command executes successfully, the TGT will be stored in the ticket cache.  
+Use the `klist` command to print the TGT info in the ticket cache:
+
+```
+$ klist
+
+Ticket cache: FILE:/tmp/krb5cc_1000
+Default principal: kyuubi_user@KYUUBI.APACHE.ORG
+
+Valid starting       Expires              Service principal
+2021-12-13T18:44:58  2021-12-14T04:44:58  krbtgt/KYUUBI.APACHE.ORG@KYUUBI.APACHE.ORG
+    renew until 2021-12-14T18:44:57
+    
+(The command is identical on different OS platforms. The ticket cache location may differ.)
+```
+
+The ticket cache may have a different storage type on different OS platforms. 
+
+For example,
+
+OS | Default Ticket Cache Type and Location
+---| ---
+Linux | FILE:/tmp/krb5cc_%{uid}
+MacOS | KCM:%{uid}:%{gid}
+Windows | API:krb5cc
+
+You can find your ticket cache type and location in the `Ticket cache` part of `klist` output.
+
+**Note**:  
+- Ensure your ticket cache type is `FILE`, as the JVM can only read a ticket cache stored as a file.
+- Do not store the TGT in the default ticket cache if you run Kyuubi and execute `kinit` on the same 
+host with the same OS user. The default ticket cache is already used by the Kyuubi server.
+
+If the default ticket cache is not a file, or if it is already used by the Kyuubi server, you 
+should store the ticket cache in another file location.  
+This can be achieved by specifying a file location with the `-c` argument of the `kinit` command.
+
+For example,
+```
+$ kinit -c /tmp/krb5cc_beeline -kt kyuubi_user.keytab kyuubi_user@KYUUBI.APACHE.ORG
+
+(The command is identical on different OS platforms)
+```
+
+To check that ticket cache, specify the file location with the `-c` argument of the `klist` command.
+
+For example,
+```
+$ klist -c /tmp/krb5cc_beeline
+
+(The command is identical on different OS platforms)
+```
+
+## Add Kerberos Client Configuration File to JVM Search Path
+The JVM that the JDBC client runs on also needs to read the Kerberos client configuration file.
+However, the JVM uses different default locations from the Kerberos client, and does not honor the `KRB5_CONFIG`
+environment variable.
+
+OS | JVM Search Paths
+---| ---
+Linux | System scope: `/etc/krb5.conf`
+MacOS | User scope: `$HOME/Library/Preferences/edu.mit.Kerberos`<br/>System scope: `/etc/krb5.conf`
+Windows | User scope: `%USERPROFILE%\krb5.ini`<br/>System scope: `%windir%\krb5.ini`
+
+You can use the JVM system property `java.security.krb5.conf` to override the default location.
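+
+If you cannot put `krb5.conf` in one of those locations, you can either pass `-Djava.security.krb5.conf=/path/to/krb5.conf` when launching the JVM, or set the property programmatically before the first Kerberos login, as in the small sketch below (the path is a placeholder):
+
+```java
+// Must run before the JDBC driver attempts any Kerberos/JAAS login.
+// The path is a placeholder; point it at your own Kerberos client configuration file.
+System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
+```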
+
+## Add Kerberos Ticket Cache to JVM Search Path
+JVM determines the ticket cache location in the following order:
+1. Path specified by `KRB5CCNAME` environment variable. Path must start with `FILE:`.
+2. `/tmp/krb5cc_%{uid}` on Unix-like OS, e.g. Linux, MacOS
+3. `${user.home}/krb5cc_${user.name}` if `${user.name}` is not null
+4. `${user.home}/krb5cc` if `${user.name}` is null
+
+**Note**:  
+- `${user.home}` and `${user.name}` are JVM system properties.
+- `${user.home}` should be replaced with `${user.dir}` if `${user.home}` is null.
+ 
+Ensure your ticket cache is stored as a file and put it in one of the above locations. 
+
+## Ensure core-site.xml Exists in Classpath
+As with Hadoop clients, `hadoop.security.authentication` should be set to `KERBEROS` in `core-site.xml` 
+to let the Hive JDBC driver use Kerberos authentication. `core-site.xml` should be placed on the beeline 
+classpath or the BI tool's classpath.
+
+### Beeline
+Here are the usual locations where `core-site.xml` should exist for different beeline distributions:
+
+Client | Location | Note
+--- | --- | ---
+Hive beeline | `$HADOOP_HOME/etc/hadoop` | Hive resolves `$HADOOP_HOME` and uses the `$HADOOP_HOME/bin/hadoop` command to launch beeline. `$HADOOP_HOME/etc/hadoop` is in the `hadoop` command's classpath.
+Spark beeline | `$HADOOP_CONF_DIR` | In `$SPARK_HOME/conf/spark-env.sh`, `$HADOOP_CONF_DIR` is often set to the directory containing the Hadoop client configuration files.
+Kyuubi beeline | `$HADOOP_CONF_DIR` | In `$KYUUBI_HOME/conf/kyuubi-env.sh`, `$HADOOP_CONF_DIR` is often set to the directory containing the Hadoop client configuration files.
+
+If `core-site.xml` is not found in the above locations, create one with the following content:
+```xml
+<configuration>
+  <property>
+    <name>hadoop.security.authentication</name>
+    <value>kerberos</value>
+  </property>
+</configuration>
+```
+
+### BI Tools
+As for BI tools, the way to add `core-site.xml` varies.  
+Take DBeaver as an example. We can add files to DBeaver's classpath through its `Global libraries` preference.  
+As `Global libraries` only accepts jar files, you should package `core-site.xml` into a jar file.
+
+```bash
+$ jar -c -f core-site.jar core-site.xml
+
+(The command is identical on different OS platforms)
+```
+
+## Connect with JDBC URL
+The last step is to connect to Kyuubi with the right JDBC URL.  
+The JDBC URL should be in the following format: 
+
+```
+jdbc:hive2://<kyuubi_server_address>:<kyuubi_server_port>/<db>;principal=<kyuubi_server_principal>
+```
+
+**Note**:  
+- `kyuubi_server_principal` is the value of `kyuubi.kinit.principal` set in `kyuubi-defaults.conf`.
+- As a command-line argument, the JDBC URL should be quoted to avoid being split into two commands by ";".
+- As for DBeaver, `<db>;principal=<kyuubi_server_principal>` should be set as the `Database/Schema` argument.
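+
+For example, a beeline invocation against a kerberized Kyuubi server could look like the sketch below; the address, port, database and server principal are the placeholders described above and must be replaced with your own values:
+
+```bash
+# Quote the whole URL so the shell does not split it at ";".
+$ beeline -u "jdbc:hive2://<kyuubi_server_address>:<kyuubi_server_port>/<db>;principal=<kyuubi_server_principal>"
+```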
+
diff --git a/content/docs/latest/_sources/client/advanced/logging.rst.txt b/content/docs/latest/_sources/client/advanced/logging.rst.txt
new file mode 100644
index 0000000..ab0013a
--- /dev/null
+++ b/content/docs/latest/_sources/client/advanced/logging.rst.txt
@@ -0,0 +1,17 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Logging
+=======
diff --git a/content/docs/latest/_sources/client/bi_tools/datagrip.md.txt b/content/docs/latest/_sources/client/bi_tools/datagrip.md.txt
new file mode 100644
index 0000000..517061d
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/datagrip.md.txt
@@ -0,0 +1,57 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# DataGrip
+## What is DataGrip
+[DataGrip](https://www.jetbrains.com/datagrip/) is a multi-engine database environment released by JetBrains, supporting MySQL and PostgreSQL, Microsoft SQL Server and Oracle, Sybase, DB2, SQLite, HyperSQL, Apache Derby, and H2.
+
+## Preparation
+### Get DataGrip And Install
+Please go to [Download DataGrip](https://www.jetbrains.com/datagrip/download) to get and install an appropriate version for yourself.
+### Get Kyuubi Started
+[Get kyuubi server started](../../quick_start.md) before you try DataGrip with kyuubi.
+
+For debugging purposes, you can use `tail -f` or `tailf` to track the server log.
+## Configurations
+### Start DataGrip
+After you install DataGrip, just launch it.
+### Select Database
+Essentially, this step is to choose a JDBC driver type to use later. We can choose Apache Hive to set up a driver for Kyuubi.
+
+![select database](../../imgs/datagrip/select_database.png)
+### Datasource Driver
+You should first download the missing driver files. Just click on the link below, and DataGrip will download and install them. 
+
+![datasource and driver](../../imgs/datagrip/datasource_and_driver.png)
+### Generic JDBC Connection Settings
+After installing the drivers, you should configure the right host and port, which you can find in the kyuubi server log. By default, we use `localhost` and `10009`.
+
+Of course, you can fill in other configs.
+
+After the generic configs are done, you can click `Test Connection` to verify them.
+
+![configuration](../../imgs/datagrip/configuration.png)
+## Interacting With Kyuubi Server
+Now, you can interact with Kyuubi server.
+
+The left side of the screenshot shows the tables, and the right side shows the console.
+
+You can interact through the visual interface or code.
+
+![workspace](../../imgs/datagrip/workspace.png)
+## The End
+There are many other amazing features in both Kyuubi and DataGrip, and this is just the tip of the iceberg. The rest is for you to discover.
\ No newline at end of file
diff --git a/content/docs/latest/_sources/client/bi_tools/dbeaver.rst.txt b/content/docs/latest/_sources/client/bi_tools/dbeaver.rst.txt
new file mode 100644
index 0000000..0064985
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/dbeaver.rst.txt
@@ -0,0 +1,125 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+DBeaver
+=======
+
+What is DBeaver
+---------------
+
+.. image:: https://raw.githubusercontent.com/wiki/dbeaver/dbeaver/images/dbeaver-icon-64x64.png
+
+`DBeaver`_ is a free multi-platform database tool for developers, database administrators, analysts, and all people who need to work with databases.
+It supports all popular databases as well as the Kyuubi JDBC driver.
+
+.. seealso:: `DBeaver Wiki`_
+
+Installation
+------------
+Please go to the `Download DBeaver`_ page to get and install an appropriate release version.
+
+.. versionadded:: 22.1.0(dbeaver)
+   DBeaver has officially supported the Apache Kyuubi JDBC driver since 06 Jun 2022 via `PR 16567 <https://github.com/dbeaver/dbeaver/issues/16567>`_.
+
+Using DBeaver with Kyuubi
+-------------------------
+If you have successfully installed dbeaver, just hit the button to launch it.
+
+New Connection
+**************
+
+First, we need to create a database connection against a live Kyuubi server.
+You can find the Kyuubi JDBC driver since DBeaver 22.1.0, as shown in the following figure.
+
+.. image:: ../../imgs/dbeaver/new_database_connection.png
+
+.. note::
+   We can also choose Apache Hive or Apache Spark to set up a driver for Kyuubi, because they are compatible with the same client.
+
+Configure Connection
+********************
+
+Second, we configure the JDBC connection settings to form the underlying Kyuubi JDBC connection URL string.
+
+Basic Connection Settings
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The basic connection settings contain the minimal set of items you need to talk to the Kyuubi server:
+
+- Host - hostname or IP address that the kyuubi server is bound to, default: `localhost`.
+- Port - port that the kyuubi server is listening on, default: `10009`.
+- Database/Schema - database or schema to use, default: `default`.
+- Authentication - identity information, such as user/password, based on the server authentication mechanism.
+
+Session Configurations
+^^^^^^^^^^^^^^^^^^^^^^
+
+The session configuration list is an optional part of the Kyuubi JDBC URL, and is very helpful for overriding some configurations of the Kyuubi server at session scope.
+The setup page of DBeaver does not contain a text box for this purpose.
+However, we can append the semicolon-separated configuration pairs to the Database/Schema field, leading with a number sign (#).
+Though it's a bit odd, it works.
+
+.. image:: ../../imgs/dbeaver/configure_database_connection.png
+
+As an example, shown in the picture above, the engine uses 2 gigabytes of memory for the driver process of the Kyuubi engine and will be terminated after being idle for 30 seconds.
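+
+For instance, a Database/Schema value like the sketch below would express that intent. The configuration keys are ordinary Spark/Kyuubi ones and are shown only as an illustration; the exact keys and values in the screenshot may differ.
+
+.. code-block:: text
+
+   default#spark.driver.memory=2g;kyuubi.session.engine.idle.timeout=PT30S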
+
+Connecting in HA mode
+^^^^^^^^^^^^^^^^^^^^^
+
+Kyuubi supports HA by service discovery over an Apache ZooKeeper cluster.
+
+.. image:: ../../imgs/dbeaver/configure_database_connection_ha.png
+
+As shown in the above picture, the Host and Port fields can be used to hold the comma-separated ZooKeeper peers,
+while `serviceDiscoveryMode` and `zooKeeperNamespace` are appended to the Database/Schema field.
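+
+The underlying JDBC URL produced by these fields would then look roughly like the following; the ZooKeeper hosts and the namespace are placeholder values:
+
+.. code-block:: text
+
+   jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi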
+
+Test Connection
+***************
+
+It is not necessary, but recommended, to click `Test Connection` to verify that the connection is set up correctly.
+If something goes wrong on the client side or the server side, we can debug it up front with the error message.
+
+SQL Operations
+**************
+
+Now, we can use the SQL editor to write queries that interact with the Kyuubi server through the connection.
+
+.. code-block:: sql
+
+   DESC NAMESPACE DEFAULT;
+
+.. code-block:: sql
+
+   CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET;
+   INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao');
+
+.. code-block:: sql
+
+   SELECT KEY % 10 AS ID, SUBSTRING(VALUE, 1, 4) AS NAME FROM spark_catalog.`default`.SRC;
+
+.. image:: ../../imgs/dbeaver/metadata.png
+
+.. code-block:: sql
+
+   DROP TABLE spark_catalog.`default`.SRC;
+
+Client Authentication
+---------------------
+For kerberized kyuubi clusters, please refer to `Kerberos Authentication`_ for more information.
+
+.. _DBeaver: https://dbeaver.io/
+.. _DBeaver Wiki: https://github.com/dbeaver/dbeaver/wiki
+.. _Download DBeaver: https://dbeaver.io/download/
+.. _Kerberos Authentication: ../advanced/kerberized_kyuubi.html#bi-tools
diff --git a/content/docs/latest/_sources/client/bi_tools/hue.md.txt b/content/docs/latest/_sources/client/bi_tools/hue.md.txt
new file mode 100644
index 0000000..ff11632
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/hue.md.txt
@@ -0,0 +1,130 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Cloudera Hue
+
+## What is Hue
+
+[Hue](https://gethue.com/) is an open source SQL Assistant for Databases & Data Warehouses.
+
+## Preparation
+
+### Get Kyuubi Started
+
+[Get the server started](../../quick_start.md) first before you try Hue with Kyuubi.
+
+```bash
+Welcome to
+  __  __                           __
+ /\ \/\ \                         /\ \      __
+ \ \ \/'/'  __  __  __  __  __  __\ \ \____/\_\
+  \ \ , <  /\ \/\ \/\ \/\ \/\ \/\ \\ \ '__`\/\ \
+   \ \ \\`\\ \ \_\ \ \ \_\ \ \ \_\ \\ \ \L\ \ \ \
+    \ \_\ \_\/`____ \ \____/\ \____/ \ \_,__/\ \_\
+     \/_/\/_/`/___/> \/___/  \/___/   \/___/  \/_/
+                /\___/
+                \/__/
+```
+
+## Run Hue in Docker
+
+Here we demo running Kyuubi on macOS and Hue on [Docker for Mac](https://docs.docker.com/docker-for-mac/). 
+There are several known network limitations, and you can find 
+[workarounds here](https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds).
+
+### Configuration
+
+1. Copy a configuration template from Hue Docker image.
+
+```
+docker run --rm gethue/hue:latest cat /usr/share/hue/desktop/conf/hue.ini > hue.ini
+```
+
+2. Modify the `hue.ini`
+
+```ini
+[beeswax]
+  # Kyuubi 1.1.x support thrift version from 1 to 10
+  thrift_version=7
+  # change to your username to avoid permissions issue for local test
+  auth_username=chengpan
+
+[notebook]
+  [[interpreters]]
+    [[[sql]]]
+      name=SparkSQL
+      interface=hiveserver2
+      
+[spark]
+  # Host of the Spark Thrift Server
+  # For macOS users, use docker.for.mac.host.internal to access host network
+  sql_server_host=docker.for.mac.host.internal
+
+  # Port of the Spark Thrift Server
+  sql_server_port=10009
+  
+# other configurations
+...
+```
+
+### Start Hue in Docker
+
+```
+docker run -p 8888:8888 -v $PWD/hue.ini:/usr/share/hue/desktop/conf/hue.ini gethue/hue:latest
+```
+
+Go to http://localhost:8888/ and follow the guide to create an account.
+
+![](../../imgs/hue/start.png)
+
+Having fun with Hue and Kyuubi!
+
+![](../../imgs/hue/spark_sql_docker.png)
+
+## For CDH 6.x Users
+
+If you are using CDH 6.x, note that CDH 6.x blocks Spark by default. You need to modify the configuration, 
+overriding `desktop.app_blacklist`, to remove this restriction.
+
+Config Hue in Cloudera Manager.
+
+![](../../imgs/hue/cloudera_manager.png)
+
+Refer to the following configuration and tune it to fit your environment.
+```
+[desktop]
+ app_blacklist=zookeeper,hbase,impala,search,sqoop,security
+ use_new_editor=true
+[[interpreters]]
+[[[sparksql]]]
+  name=Spark SQL
+  interface=hiveserver2
+  # other interpreters
+  ...
+[spark]
+sql_server_host=kyuubi-server-host
+sql_server_port=10009
+```
+
+You need to restart the Hue service to activate the configuration changes, and then Spark SQL will be available in the editor list.
+
+![](../../imgs/hue/editor.png)
+
+Having fun with Hue and Kyuubi!
+
+![](../../imgs/hue/spark_sql_cdh6.png)
diff --git a/content/docs/latest/_sources/client/bi_tools/index.rst.txt b/content/docs/latest/_sources/client/bi_tools/index.rst.txt
new file mode 100644
index 0000000..b11076a
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/index.rst.txt
@@ -0,0 +1,32 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Business Intelligence Tools and SQL IDEs
+========================================
+
+Kyuubi provides a standard JDBC/ODBC interface over thrift that allows various existing BI tools and SQL clients/IDEs to connect.
+
+.. note:: Is your favorite tool missing?
+   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
+
+.. toctree::
+    :maxdepth: 1
+
+    superset
+    hue
+    datagrip
+    dbeaver
+    powerbi
+    tableau
diff --git a/content/docs/latest/_sources/client/bi_tools/powerbi.rst.txt b/content/docs/latest/_sources/client/bi_tools/powerbi.rst.txt
new file mode 100644
index 0000000..2da6747
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/powerbi.rst.txt
@@ -0,0 +1,21 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`PowerBI`_
+==========
+
+.. warning:: The document you are visiting now is incomplete; please help the Kyuubi community fix it if you can.
+
+.. _PowerBI: https://powerbi.microsoft.com/en-us/
\ No newline at end of file
diff --git a/content/docs/latest/_sources/client/bi_tools/superset.rst.txt b/content/docs/latest/_sources/client/bi_tools/superset.rst.txt
new file mode 100644
index 0000000..52afdd9
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/superset.rst.txt
@@ -0,0 +1,21 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Apache Superset`_
+==================
+
+.. warning:: The document you are visiting now is incomplete; please help the Kyuubi community fix it if you can.
+
+.. _Apache Superset: https://superset.apache.org/
\ No newline at end of file
diff --git a/content/docs/latest/_sources/client/bi_tools/tableau.rst.txt b/content/docs/latest/_sources/client/bi_tools/tableau.rst.txt
new file mode 100644
index 0000000..ef6e6aa
--- /dev/null
+++ b/content/docs/latest/_sources/client/bi_tools/tableau.rst.txt
@@ -0,0 +1,21 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Tableau`_
+==========
+
+.. warning:: The document you are visiting now is incomplete; please help the Kyuubi community fix it if you can.
+
+.. _Tableau: https://www.tableau.com/
\ No newline at end of file
diff --git a/content/docs/latest/_sources/client/cli/hive_beeline.rst.txt b/content/docs/latest/_sources/client/cli/hive_beeline.rst.txt
new file mode 100644
index 0000000..fda925a
--- /dev/null
+++ b/content/docs/latest/_sources/client/cli/hive_beeline.rst.txt
@@ -0,0 +1,31 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Hive Beeline
+============
+
+Kyuubi supports the Apache Hive beeline, which works with the Kyuubi server.
+Hive beeline is a `SQLLine CLI <http://sqlline.sourceforge.net/>`_ based on the `Hive JDBC Driver <../jdbc/hive_jdbc.html>`_.
+
+Prerequisites
+-------------
+
+- Kyuubi server installed and launched.
+- Hive beeline installed
+
+.. important:: Kyuubi does not support embedded mode, in which beeline and the server run in the same process.
+   It always uses remote mode, connecting beeline to a separate server process over thrift.
+
+.. warning:: The document you are visiting now is incomplete; please help the Kyuubi community fix it if you can.
diff --git a/content/docs/latest/_sources/client/cli/index.rst.txt b/content/docs/latest/_sources/client/cli/index.rst.txt
new file mode 100644
index 0000000..61be9ad
--- /dev/null
+++ b/content/docs/latest/_sources/client/cli/index.rst.txt
@@ -0,0 +1,23 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Command Line Interfaces (CLIs)
+==============================
+
+.. toctree::
+    :maxdepth: 2
+
+    kyuubi_beeline
+    hive_beeline
diff --git a/content/docs/latest/_sources/client/cli/kyuubi_beeline.rst.txt b/content/docs/latest/_sources/client/cli/kyuubi_beeline.rst.txt
new file mode 100644
index 0000000..e217810
--- /dev/null
+++ b/content/docs/latest/_sources/client/cli/kyuubi_beeline.rst.txt
@@ -0,0 +1,22 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Kyuubi Beeline
+==============
+
+.. warning:: The document you are visiting now is incomplete; please help the Kyuubi community fix it if you can.
+
+
+
diff --git a/content/docs/latest/_sources/client/index.rst.txt b/content/docs/latest/_sources/client/index.rst.txt
new file mode 100644
index 0000000..049f593
--- /dev/null
+++ b/content/docs/latest/_sources/client/index.rst.txt
@@ -0,0 +1,39 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Clients
+=======
+
+This section aims to document the APIs, clients, and tools for end-users who do not need to care about deployment on the kyuubi server side.
+
+Kyuubi provides standards-based drivers for JDBC and ODBC, enabling developers to build database applications in their language of choice.
+
+In addition, APIs like REST, Thrift, etc., allow developers to access kyuubi directly and flexibly.
+
+.. note::
+   When you try some of the examples in this section, make sure you have an available Kyuubi server.
+
+.. toctree::
+    :maxdepth: 2
+
+    jdbc/index
+    cli/index
+    bi_tools/index
+    odbc/index
+    thrift/index
+    rest/index
+    ui/index
+    python/index
+    advanced/index
\ No newline at end of file
diff --git a/content/docs/latest/_sources/client/jdbc/hive_jdbc.md.txt b/content/docs/latest/_sources/client/jdbc/hive_jdbc.md.txt
new file mode 100644
index 0000000..f0886d7
--- /dev/null
+++ b/content/docs/latest/_sources/client/jdbc/hive_jdbc.md.txt
@@ -0,0 +1,82 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Hive JDBC Driver
+
+
+## Instructions
+
+Kyuubi does not provide its own JDBC driver so far,
+as it is fully compatible with the Hive JDBC and ODBC drivers, which let you connect popular Business Intelligence (BI) tools to query,
+analyze and visualize data through Spark SQL engines.
+
+
+## Install Hive JDBC
+
+For programming, the easiest way to get `hive-jdbc` is from [Maven Central](https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc). For example,
+
+- **maven**
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>2.3.8</version>
+</dependency>
+```
+
+- **sbt**
+```scala
+libraryDependencies += "org.apache.hive" % "hive-jdbc" % "2.3.8"
+```
+
+- **gradle**
+```gradle
+implementation group: 'org.apache.hive', name: 'hive-jdbc', version: '2.3.8'
+```
+
+For BI tools, please refer to [Quick Start](../quick_start/index.html) to check the guide for the BI tool in use.
+If you find there is no specific document for the BI tool that you are using, don't worry: the configuration part is basically the same for all BI tools.
+Also, we would appreciate it if you could help us improve the document.
+
+
+## JDBC URL
+
+JDBC URLs have the following format:
+
+```
+jdbc:hive2://<host>:<port>/<dbName>;<sessionVars>?<kyuubiConfs>#<[spark|hive]Vars>
+```
+
+JDBC Parameter | Description
+---------------| -----------
+host | The cluster node hosting the Kyuubi Server.
+port | The port number on which the Kyuubi Server is listening.
+dbName | Optional database name to set the current database to run the query against; use `default` if absent.
+sessionVars | Optional `Semicolon(;)` separated `key=value` parameters for the JDBC/ODBC driver, such as `user`, `password` and `hive.server2.proxy.user`.
+kyuubiConfs | Optional `Semicolon(;)` separated `key=value` parameters for the Kyuubi server to create the corresponding engine, ignored if the engine already exists.
+[spark&#124;hive]Vars | Optional `Semicolon(;)` separated `key=value` parameters for Spark/Hive variables used for variable substitution.
+
+## Example
+
+```
+jdbc:hive2://localhost:10009/default;hive.server2.proxy.user=proxy_user?kyuubi.engine.share.level=CONNECTION;spark.ui.enabled=false#var_x=y
+```
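+
+As a short, self-contained sketch of using such a URL from Java with the `hive-jdbc` dependency above (the host, port, credentials and query below are placeholder values, not requirements):
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class KyuubiHiveJdbcExample {
+  public static void main(String[] args) throws Exception {
+    // Register the Hive JDBC driver; recent driver versions also register themselves automatically.
+    Class.forName("org.apache.hive.jdbc.HiveDriver");
+    String url = "jdbc:hive2://localhost:10009/default";
+    try (Connection conn = DriverManager.getConnection(url, "user", "password");
+         Statement stmt = conn.createStatement();
+         ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
+      while (rs.next()) {
+        System.out.println(rs.getString(1));
+      }
+    }
+  }
+}
+```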
+
+## Unsupported Hive Features
+
+- Connect to HiveServer2 using HTTP transport. ```transportMode=http```
diff --git a/content/docs/latest/_sources/client/jdbc/index.rst.txt b/content/docs/latest/_sources/client/jdbc/index.rst.txt
new file mode 100644
index 0000000..31871f1
--- /dev/null
+++ b/content/docs/latest/_sources/client/jdbc/index.rst.txt
@@ -0,0 +1,25 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+JDBC Drivers
+============
+
+.. toctree::
+    :maxdepth: 1
+
+    kyuubi_jdbc
+    hive_jdbc
+    mysql_jdbc
+
diff --git a/content/docs/latest/_sources/client/jdbc/kyuubi_jdbc.rst.txt b/content/docs/latest/_sources/client/jdbc/kyuubi_jdbc.rst.txt
new file mode 100644
index 0000000..fdc40d5
--- /dev/null
+++ b/content/docs/latest/_sources/client/jdbc/kyuubi_jdbc.rst.txt
@@ -0,0 +1,160 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Kyuubi Hive JDBC Driver
+=======================
+
+.. versionadded:: 1.4.0
+   Since 1.4.0, the kyuubi community maintains a forked Hive JDBC driver module and provides both shaded and non-shaded packages.
+
+This package aims to support some functionalities missing from the original Hive JDBC driver.
+For kyuubi engines that support multiple catalogs, it provides metadata APIs for better support.
+The behaviors of the original Hive JDBC driver remain unchanged.
+
+To access a Hive data warehouse or new lakehouse formats, such as Apache Iceberg/Hudi and Delta Lake, using the kyuubi JDBC driver for Apache Kyuubi, you need to configure
+the following:
+
+- The list of driver library files - :ref:`referencing-libraries`.
+- The Driver or DataSource class - :ref:`registering_class`.
+- The connection URL for the driver - :ref:`building_url`
+
+.. _referencing-libraries:
+
+Referencing the JDBC Driver Libraries
+-------------------------------------
+
+Before you use the jdbc driver for Apache Kyuubi, the JDBC application or Java code that
+you are using to connect to your data must be able to access the driver JAR files.
+
+Using the Driver in Java Code
+*****************************
+
+In the code, specify the artifact `kyuubi-hive-jdbc-shaded` from `Maven Central`_ according to the build tool you use.
+
+Maven
+^^^^^
+
+.. code-block:: xml
+
+   <dependency>
+       <groupId>org.apache.kyuubi</groupId>
+       <artifactId>kyuubi-hive-jdbc-shaded</artifactId>
+       <version>1.5.2-incubating</version>
+   </dependency>
+
+Sbt
+^^^
+
+.. code-block:: sbt
+
+   libraryDependencies += "org.apache.kyuubi" % "kyuubi-hive-jdbc-shaded" % "1.5.2-incubating"
+
+
+Gradle
+^^^^^^
+
+.. code-block:: gradle
+
+   implementation group: 'org.apache.kyuubi', name: 'kyuubi-hive-jdbc-shaded', version: '1.5.2-incubating'
+
+Using the Driver in a JDBC Application
+**************************************
+
+For `JDBC Applications`_, such as BI tools, SQL IDEs, please check the specific guide for detailed information.
+
+.. note:: Is your favorite tool missing?
+   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
+
+.. _registering_class:
+
+Registering the Driver Class
+----------------------------
+
+Before connecting to your data, you must register the JDBC Driver class for your application.
+
+- org.apache.kyuubi.jdbc.KyuubiHiveDriver
+- org.apache.kyuubi.jdbc.KyuubiDriver (Deprecated)
+
+The following sample code shows how to use the `java.sql.DriverManager`_ class to establish a
+connection for JDBC:
+
+.. code-block:: java
+
+   private static Connection connectViaDM() throws Exception {
+      // CONNECTION_URL is a Kyuubi JDBC connection URL, built as described in the next section
+      return DriverManager.getConnection(CONNECTION_URL);
+   }
+
+.. _building_url:
+
+Building the Connection URL
+---------------------------
+
+Basic Connection URL format
+***************************
+
+Use the connection URL to supply connection information to the Kyuubi server or cluster that you are
+accessing. The following is the format of the connection URL for the Kyuubi Hive JDBC Driver:
+
+.. code-block:: jdbc
+
+   jdbc:subprotocol://host:port/schema;<clientProperties;><[#|?]sessionProperties>
+
+- subprotocol: kyuubi or hive2
+- host: DNS or IP address of the kyuubi server
+- port: The number of the TCP port that the server uses to listen for client requests
+- schema: Optional database name to set the current database to run the query against, `default` is used if absent.
+- clientProperties: Optional semicolon(;) separated `key=value` parameters identified by the driver that affect the client behavior locally, e.g. `user=foo;password=bar`.
+- sessionProperties: Optional semicolon(;) separated `key=value` parameters used to configure the session, operation or background engines.
+  For instance, `kyuubi.engine.share.level=CONNECTION` determines that the background engine instance is used only by the current connection, and `spark.ui.enabled=false` disables the Spark UI of the engine.
+
+.. important::
+   - The sessionProperties MUST come after a leading number sign(#) or question mark (?).
+   - Properties are case-sensitive
+   - Do not duplicate properties in the connection URL
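+
+For example, a URL that combines several of the properties above (the property values here are illustrative, not recommendations):
+
+.. code-block:: jdbc
+
+   jdbc:hive2://localhost:10009/default;user=foo;password=bar?kyuubi.engine.share.level=CONNECTION;spark.ui.enabled=false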
+
+Connection URL over Http
+************************
+
+.. versionadded:: 1.6.0
+
+.. code-block:: jdbc
+
+   jdbc:subprotocol://host:port/schema;transportMode=http;httpPath=<http_endpoint>
+
+- http_endpoint is the corresponding HTTP endpoint configured by `kyuubi.frontend.thrift.http.path` at the server side.
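+
+For example, assuming the server exposes the Thrift HTTP frontend on port 10010 with `kyuubi.frontend.thrift.http.path=cliservice` (both values are illustrative):
+
+.. code-block:: jdbc
+
+   jdbc:hive2://localhost:10010/default;transportMode=http;httpPath=cliservice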
+
+Connection URL over Service Discovery
+*************************************
+
+.. code-block:: jdbc
+
+   jdbc:subprotocol://<zookeeper quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi
+
+- zookeeper quorum is the corresponding zookeeper cluster configured by `kyuubi.ha.zookeeper.quorum` at the server side.
+- zooKeeperNamespace is the corresponding namespace configured by `kyuubi.ha.zookeeper.namespace` at the server side.
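+
+For example, with a three-node ZooKeeper quorum and the default namespace (host names are placeholders):
+
+.. code-block:: jdbc
+
+   jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi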
+
+Authentication
+--------------
+
+
+DataTypes
+---------
+
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.kyuubi/kyuubi-hive-jdbc-shaded
+.. _JDBC Applications: ../bi_tools/index.html
+.. _java.sql.DriverManager: https://docs.oracle.com/javase/8/docs/api/java/sql/DriverManager.html
diff --git a/content/docs/latest/_sources/client/jdbc/mysql_jdbc.rst.txt b/content/docs/latest/_sources/client/jdbc/mysql_jdbc.rst.txt
new file mode 100644
index 0000000..0702fcd
--- /dev/null
+++ b/content/docs/latest/_sources/client/jdbc/mysql_jdbc.rst.txt
@@ -0,0 +1,26 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+`MySQL Connectors`_
+====================
+
+.. versionadded:: 1.4.0
+
+Kyuubi provides a frontend service that enables connectivity and accessibility from MySQL connectors.
+
+.. warning:: The document you are visiting now is incomplete, please help kyuubi community to fix it if appropriate for you.
+
+.. _MySQL Connectors: https://www.mysql.com/products/connector/
diff --git a/content/docs/latest/_sources/client/odbc/index.rst.txt b/content/docs/latest/_sources/client/odbc/index.rst.txt
new file mode 100644
index 0000000..0d4ed8b
--- /dev/null
+++ b/content/docs/latest/_sources/client/odbc/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+ODBC Drivers
+============================
+
+.. toctree::
+    :maxdepth: 2
+
+    todo
+
diff --git a/content/docs/latest/_sources/client/python/index.rst.txt b/content/docs/latest/_sources/client/python/index.rst.txt
new file mode 100644
index 0000000..6dfbec0
--- /dev/null
+++ b/content/docs/latest/_sources/client/python/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+Python DB-APIs
+==============
+
+.. toctree::
+    :maxdepth: 2
+
+    pyhive
+
diff --git a/content/docs/latest/_sources/client/python/pyhive.rst.txt b/content/docs/latest/_sources/client/python/pyhive.rst.txt
new file mode 100644
index 0000000..fe2f9cf
--- /dev/null
+++ b/content/docs/latest/_sources/client/python/pyhive.rst.txt
@@ -0,0 +1,22 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+`PyHive`_
+=========
+
+.. warning:: The document you are visiting now is incomplete, please help kyuubi community to fix it if appropriate for you.
+
+.. _PyHive: https://github.com/dropbox/PyHive
diff --git a/content/docs/latest/_sources/client/rest/index.rst.txt b/content/docs/latest/_sources/client/rest/index.rst.txt
new file mode 100644
index 0000000..3e94c0e
--- /dev/null
+++ b/content/docs/latest/_sources/client/rest/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+RESTful APIs and Clients
+========================
+
+.. toctree::
+    :maxdepth: 2
+
+    rest_api
+
diff --git a/content/docs/latest/_sources/client/rest/rest_api.md.txt b/content/docs/latest/_sources/client/rest/rest_api.md.txt
new file mode 100644
index 0000000..e9a40c9
--- /dev/null
+++ b/content/docs/latest/_sources/client/rest/rest_api.md.txt
@@ -0,0 +1,124 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# REST API v1
+
+Note that the current API version is v1 and the base URI is `/api/v1`.
+
+## Batch Resource
+
+### GET /batches
+
+Returns all the batches.
+
+#### Request Parameters
+
+| Name       | Description                                                                                             | Type   |
+| :--------- |:--------------------------------------------------------------------------------------------------------| :----- |
+| batchType  | The batch type, such as spark/flink; if no batchType is specified,<br/> all types are returned          | String |
+| batchState | The valid batch state can be one of the following:<br/> PENDING, RUNNING, FINISHED, ERROR, CANCELED     | String |
+| batchUser  | The user name that created the batch                                                                    | String |
+| createTime | Return batches created after this timestamp                                                             | Long   |
+| endTime    | Return batches that ended before this timestamp                                                         | Long   |
+| from       | The start index to fetch sessions                                                                       | Int    |
+| size       | Number of sessions to fetch                                                                             | Int    |
+
+#### Response Body
+
+| Name    | Description                         | Type |
+| :------ | :---------------------------------- | :--- |
+| from    | The start index of fetched sessions | Int  |
+| total   | Number of sessions fetched          | Int  |
+| batches | [Batch](#batch) List                | List |
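+
+A minimal example of listing batches with `curl` (assuming the Kyuubi REST frontend listens on `localhost:10099`; the filter values are placeholders):
+
+```shell
+curl 'http://localhost:10099/api/v1/batches?batchType=SPARK&batchState=RUNNING&from=0&size=10'
+```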
+
+### POST /batches
+
+#### Request Body
+
+| Name      | Description                                        | Type             |
+| :-------- |:---------------------------------------------------|:-----------------|
+| batchType | The batch type, such as Spark, Flink               | String           |
+| resource  | The resource containing the application to execute | Path (required)  |
+| className | Application main class                             | String(required) |
+| name      | The name of this batch.                            | String           |
+| conf      | Configuration properties                           | Map of key=val   |
+| args      | Command line arguments for the application         | List of Strings  |
+
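+A hedged example of submitting a Spark batch with `curl` (host, resource path, and class name are placeholders):
+
+```shell
+curl -X POST http://localhost:10099/api/v1/batches \
+  -H 'Content-Type: application/json' \
+  -d '{
+        "batchType": "SPARK",
+        "resource": "/path/to/spark-examples.jar",
+        "className": "org.apache.spark.examples.SparkPi",
+        "name": "spark-pi-batch",
+        "conf": {"spark.master": "yarn"},
+        "args": ["100"]
+      }'
+```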
+
+#### Response Body
+
+The created [Batch](#batch) object.
+
+### GET /batches/{batchId}
+
+Returns the batch information.
+
+#### Response Body
+
+The [Batch](#batch).
+
+### DELETE /batches/${batchId}
+
+Kill the batch if it is still running.
+
+#### Request Parameters
+
+| Name                    | Description                   | Type             |
+| :---------------------- | :---------------------------- | :--------------- |
+| hive.server2.proxy.user | the proxy user to impersonate | String(optional) |
+
+#### Response Body
+
+| Name    | Description                           | Type    |
+| :------ |:--------------------------------------| :------ |
+| success | Whether the batch was killed successfully | Boolean |
+| msg     | The kill batch message                | String  |
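+
+A minimal example (batch id and proxy user are placeholders):
+
+```shell
+curl -X DELETE 'http://localhost:10099/api/v1/batches/<batchId>?hive.server2.proxy.user=proxy_user'
+```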
+
+### GET /batches/${batchId}/localLog
+
+Gets the local log lines from this batch.
+
+#### Request Parameters
+
+| Name | Description                       | Type |
+| :--- | :-------------------------------- | :--- |
+| from | Offset                            | Int  |
+| size | Max number of log lines to return | Int  |
+
+#### Response Body
+
+| Name      | Description       | Type          |
+| :-------- | :---------------- |:--------------|
+| logRowSet | The log lines     | List of string |
+| rowCount  | The log row count | Int           |
+
+### Batch
+
+| Name           | Description                                                       | Type   |
+| :------------- |:------------------------------------------------------------------| :----- |
+| id             | The batch id                                                      | String |
+| user           | The user who created the batch                                    | String |
+| batchType      | The batch type                                                    | String |
+| name           | The batch name                                                    | String |
+| appId          | The batch application Id                                          | String |
+| appUrl         | The batch application tracking url                                | String |
+| appState       | The batch application state                                       | String |
+| appDiagnostic  | The batch application diagnostic                                  | String |
+| kyuubiInstance | The kyuubi instance that created the batch                        | String |
+| state          | The kyuubi batch operation state                                  | String |
+| createTime     | The batch create time                                             | Long   |
+| endTime        | The batch end time, if it has not been terminated, the value is 0 | Long   |
diff --git a/content/docs/latest/_sources/client/thrift/index.rst.txt b/content/docs/latest/_sources/client/thrift/index.rst.txt
new file mode 100644
index 0000000..e7def48
--- /dev/null
+++ b/content/docs/latest/_sources/client/thrift/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+Thrift APIs
+===========
+
+.. toctree::
+    :maxdepth: 2
+
+    hive_beeline
+
diff --git a/content/docs/latest/_sources/client/ui/index.rst.txt b/content/docs/latest/_sources/client/ui/index.rst.txt
new file mode 100644
index 0000000..63a02cb
--- /dev/null
+++ b/content/docs/latest/_sources/client/ui/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+Web UI
+======
+
+.. toctree::
+    :maxdepth: 2
+
+    hive_beeline
+
diff --git a/content/docs/latest/_sources/community/CONTRIBUTING.md.txt b/content/docs/latest/_sources/community/CONTRIBUTING.md.txt
new file mode 100644
index 0000000..a1c0bd9
--- /dev/null
+++ b/content/docs/latest/_sources/community/CONTRIBUTING.md.txt
@@ -0,0 +1,61 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Contributing to Apache Kyuubi
+
+Thanks for your interest in the Apache Kyuubi project.
+Contributions are welcome and are greatly appreciated!
+Every little bit helps, and credit will always be given.
+
+This page provides some orientation and resources we have for you to get involved.
+It also offers recommendations on getting the best results when engaging with the community.
+We hope this will be a pleasant first experience, and that you will return and continue contributing.
+
+## Get Involved
+
+In the process of using Apache Kyuubi, if you have any questions, suggestions, or improvement ideas, you can participate in the Kyuubi community building through the following suggested channels.
+
+- Join the [Mailing Lists](https://kyuubi.apache.org/mailing_lists.html) - the best way to keep up-to-date with the community.
+- [Issue Tracker](https://kyuubi.apache.org/issue_tracking.html) - tracking bugs, ideas, plans, etc.
+- [Github Discussions](https://github.com/apache/incubator-kyuubi/discussions) - second to mailing list for anything else you want to share or ask
+
+## Contributing Guide
+
+As a community-driven project, all bits of help are welcome.
+
+Contributing code is excellent, but that’s probably not the first place to start.
+There are many ways to make valuable contributions to the project and community.
+
+You can make various types of contributions to Kyuubi, including the following but not limited to,
+
+- Answer questions in the [Mailing Lists](https://kyuubi.apache.org/mailing_lists.html)
+- [Share your success stories with us](https://github.com/apache/incubator-kyuubi/discussions/925) 
+- Improve Documentation - [![Documentation Status](https://readthedocs.org/projects/kyuubi/badge/?version=latest)](https://kyuubi.apache.org/docs/latest/)
+- Test latest releases - [![Latest tag](https://img.shields.io/github/v/tag/apache/incubator-kyuubi?label=tag)](https://github.com/apache/incubator-kyuubi/tags)
+- Improve test coverage - [![codecov](https://codecov.io/gh/apache/incubator-kyuubi/branch/master/graph/badge.svg)](https://codecov.io/gh/apache/incubator-kyuubi)
+- Report bugs and better help developers to reproduce
+- Review changes
+- [Make a pull request](https://kyuubi.apache.org/pull_request.html)
+- Promote to others
+- Click the star button if you like this project
+
+## Easter Eggs for Contributors
+
+TBD, please be patient for the surprise.
+
+## IDE Setup Guide
+[IntelliJ IDEA Setup Guide](https://kyuubi.readthedocs.io/en/latest/develop_tools/idea_setup.html)
diff --git a/content/docs/latest/_sources/community/collaborators.md.txt b/content/docs/latest/_sources/community/collaborators.md.txt
new file mode 100644
index 0000000..d424262
--- /dev/null
+++ b/content/docs/latest/_sources/community/collaborators.md.txt
@@ -0,0 +1,22 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Collaborators
+
+[PPMC Members and Committers](https://people.apache.org/phonebook.html?podling=kyuubi)
+
+See full contributor list at [contributors](https://github.com/apache/incubator-kyuubi/graphs/contributors).
diff --git a/content/docs/latest/_sources/community/index.rst.txt b/content/docs/latest/_sources/community/index.rst.txt
new file mode 100644
index 0000000..420905d
--- /dev/null
+++ b/content/docs/latest/_sources/community/index.rst.txt
@@ -0,0 +1,27 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+Community
+=========
+
+.. toctree::
+    :maxdepth: 2
+    :glob:
+
+    CONTRIBUTING
+    collaborators
+    release
+
diff --git a/content/docs/latest/_sources/community/release.md.txt b/content/docs/latest/_sources/community/release.md.txt
new file mode 100644
index 0000000..435226d
--- /dev/null
+++ b/content/docs/latest/_sources/community/release.md.txt
@@ -0,0 +1,283 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+Kyuubi Release Guide
+===
+
+## Introduction
+The Apache Kyuubi (Incubating) project periodically declares and publishes releases. A release is one or more packages
+of the project artifact(s) that are approved for general public distribution and use. They may come with various
+degrees of caveat regarding their perceived quality and potential for change, such as "alpha", "beta", "incubating",
+"stable", etc.
+
+The Kyuubi community treats releases with great importance. They are a public face of the project and most users
+interact with the project only through the releases. Releases are signed off by the entire Kyuubi community in a
+public vote.
+
+Each release is executed by a Release Manager, who is selected among the Kyuubi committers. This document describes
+the process that the Release Manager follows to perform a release. Any changes to this process should be discussed
+and adopted on the [dev mailing list](mailto:dev@kyuubi.apache.org).
+
+Please remember that publishing software has legal consequences. This guide complements the foundation-wide 
+[Product Release Policy](https://www.apache.org/dev/release.html) and 
+[Release Distribution Policy](https://www.apache.org/dev/release-distribution).
+
+### Overview
+
+The release process consists of several steps:
+
+1. Decide to release
+2. Prepare for the release
+3. Cut branch off for __major__ release
+4. Build a release candidate
+5. Vote on the release candidate
+6. If necessary, fix any issues and go back to step 3.
+7. Finalize the release
+8. Promote the release
+
+## Decide to release
+
+Deciding to release and selecting a Release Manager is the first step of the release process. This is a consensus-based
+decision of the entire community.
+
+Anybody can propose a release on the [dev mailing list](mailto:dev@kyuubi.apache.org), giving a solid argument and
+nominating a committer as the Release Manager (including themselves). There’s no formal process, no vote requirements,
+and no timing requirements. Any objections should be resolved by consensus before starting the release.
+
+In general, the community prefers to have a rotating set of 1-2 Release Managers. Keeping a small core set of managers
+allows enough people to build expertise in this area and improve processes over time, without Release Managers needing
+to re-learn the processes for each release. That said, if you are a committer interested in serving the community in
+this way, please reach out to the community on the [dev mailing list](mailto:dev@kyuubi.apache.org).
+
+### Checklist to proceed to the next step
+
+1. Community agrees to release
+2. Community selects a Release Manager
+
+## Prepare for the release
+
+Before your first release, you should perform one-time configuration steps. This will set up your security keys for
+signing the release and access to various release repositories.
+
+### One-time setup instructions
+
+#### ASF authentication
+
+The environment variables `ASF_USERNAME` and `ASF_PASSWORD` are used in several places throughout the release
+process; you can either set them up once in `~/.bashrc` or `~/.zshrc`, or export them in the terminal every time.
+
+```shell
+export ASF_USERNAME=<your apache username>
+export ASF_PASSWORD=<your apache password>
+```
+
+#### Java Home
+Make sure the environment variable `JAVA_HOME` is available; you can run `echo $JAVA_HOME` to check it.
+Note that the Java version should be 8.
+
+#### Subversion
+
+Besides `git`, `svn` is also required for an Apache release, please refer to
+https://www.apache.org/dev/version-control.html#https-svn for details.
+
+#### GPG Key
+
+You need to have a GPG key to sign the release artifacts. Please be aware of the ASF-wide
+[release signing guidelines](https://www.apache.org/dev/release-signing.html). If you don’t have a GPG key associated
+with your Apache account, please create one according to the guidelines.
+
+Determine your Apache GPG Key and Key ID, as follows:
+```shell
+gpg --list-keys --keyid-format SHORT
+```
+
+This will list your GPG keys. One of these should reflect your Apache account, for example:
+```shell
+pub   rsa4096 2021-08-30 [SC]
+      8FC8075E1FDC303276C676EE8001952629BCC75D
+uid           [ultimate] Cheng Pan <ch...@apache.org>
+sub   rsa4096 2021-08-30 [E]
+```
+
+> Note: To follow the [Apache's release specification](https://infra.apache.org/release-signing.html#note), all new RSA keys generated should be at least 4096 bits. Do not generate new DSA keys.
+
+Here, the key ID is the 8-digit hex string in the pub line: `29BCC75D`.
+
+To export the PGP public key, use:
+```shell
+gpg --armor --export 29BCC75D
+```
+
+If you have more than one GPG key, you can specify the default key as follows:
+```
+echo 'default-key <key-fpr>' > ~/.gnupg/gpg.conf
+```
+
+The last step is to update the KEYS file with your code signing key, see
+https://www.apache.org/dev/openpgp.html#export-public-key
+
+```shell
+svn checkout --depth=files "https://dist.apache.org/repos/dist/release/incubator/kyuubi" work/svn-kyuubi
+
+(gpg --list-sigs "${ASF_USERNAME}@apache.org" && gpg --export --armor "${ASF_USERNAME}@apache.org") >> work/svn-kyuubi/KEYS
+
+svn commit --username "${ASF_USERNAME}" --password "${ASF_PASSWORD}" --message "Update KEYS" work/svn-kyuubi
+```
+
+In order to have the right permission to stage Java artifacts in the Apache Nexus staging repository, please submit your GPG public key to the Ubuntu key server via
+
+```shell
+gpg --keyserver hkp://keyserver.ubuntu.com --send-keys ${PUBLIC_KEY} # send public key to ubuntu server
+gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys ${PUBLIC_KEY} # verify
+```
+
+## Cut branch off for major release
+
+Kyuubi uses the version pattern `{MAJOR_VERSION}.{MINOR_VERSION}.{PATCH_VERSION}[-{OPTIONAL_SUFFIX}]`, e.g. `1.3.0-incubating`.
+__Major Release__ means `MAJOR_VERSION` or `MINOR_VERSION` changed, and __Patch Release__ means `PATCH_VERSION` changed.
+
+The main step towards preparing a major release is to create a release branch. This is done via the standard Git branching
+mechanism and should be announced to the community once the branch is created.
+
+> Note: If you are releasing a patch version, you can ignore this step.
+
+The release branch pattern is `branch-{MAJOR_VERSION}.{MINOR_VERSION}`, e.g. `branch-1.3`.
+
+After cutting the release branch, don't forget to bump the version in the `master` branch, as sketched below.
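+
+A minimal sketch of cutting the branch and bumping master (branch name, version, and remote name below are examples only):
+
+```shell
+# create and push the release branch from the current master
+git checkout -b branch-1.6 master
+git push apache branch-1.6
+
+# back on master, bump to the next development version
+git checkout master
+build/mvn versions:set -DgenerateBackupPoms=false \
+  -DnewVersion="1.7.0-SNAPSHOT" \
+  -Pspark-3.2,spark-block-cleaner
+git commit -am "[RELEASE] Bump 1.7.0-SNAPSHOT"
+```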
+
+## Build a release candidate
+
+> Don't forget to switch to the release branch!  
+
+1. Set environment variables.
+
+```shell
+export RELEASE_VERSION=<release version, e.g. 1.3.0-incubating>
+export RELEASE_RC_NO=<RC number, e.g. 0>
+```
+
+2. Bump version.
+
+```shell
+build/mvn versions:set -DgenerateBackupPoms=false \
+  -DnewVersion="${RELEASE_VERSION}" \
+  -Pspark-3.2,spark-block-cleaner
+
+git commit -am "[RELEASE] Bump ${RELEASE_VERSION}"
+```
+
+3. Create a git tag for the release candidate.
+
+The tag pattern is `v${RELEASE_VERSION}-rc${RELEASE_RC_NO}`, e.g. `v1.3.0-incubating-rc0`
+
+> NOTE: After all the voting passed, be sure to create a final tag with the pattern: `v${RELEASE_VERSION}`
+
+4. Package the release binaries & sources, and upload them to the Apache staging SVN repo. Publish jars to the Apache
+staging Maven repo.
+
+```shell
+build/release/release.sh publish
+```
+
+To make your release available in the staging repository, you must close the staging repo in the [Apache Nexus](https://repository.apache.org/#stagingRepositories). Until you close it, you can re-run the deployment to staging multiple times; once it is closed, the next deployment will create a new staging repo, so the next RC (if need be) lands on a fresh repo. Once everything is good, close the staging repository on Apache Nexus.
+
+5. Generate a pre-release note from GitHub for the subsequent voting.
+
+Go to the [release page](https://github.com/apache/incubator-kyuubi/releases) and click the "Draft a new release" button, which will take you to a new page to prepare the release.
+
+Fill in all the necessary information required by the form. At the bottom of the form, check the "This is a pre-release" checkbox. Finally, click the "Publish release" button to finish the step.
+
+> Note: the pre-release note is used for voting purposes and will be marked with a **Pre-release** tag. After all the votes (dev and general) are finished, do not forget to uncheck the "This is a pre-release" checkbox. The pre-release version comes from vx.y.z-incubating-rcN tags, and the final version should come from vx.y.z-incubating tags.
+
+## Vote on the release candidate
+
+The release voting takes place on the Apache Kyuubi (Incubating) developers list (the (P)PMC is voting).
+
+- If possible, attach a draft of the release notes with the email.
+- It is recommended to represent the voting closing time in UTC.
+- Make sure the email is in text format and the links are correct
+
+> Note: you can generate the voting mail content for the dev ML automatically by invoking the `build/release/script/dev_kyuubi_vote.sh` script.
+
+Once the vote is done, you should also send out a summary email with the totals, with a subject that looks
+something like __[VOTE][RESULT] ....__
+
+Then, you can move the release vote to the general incubator mailing list, and generate the voting mail content automatically by invoking the `build/release/script/general_incubator_vote.sh` script.
+Also, you should send out a summary email like the one for the dev ML vote.
+
+> Note: if the vote is cancelled for any reason, you should restart the vote on the dev ML first.
+
+## Finalize the Release
+
+__Be Careful!__
+
+__THIS STEP IS IRREVERSIBLE so make sure you selected the correct staging repository.__
+__Once you move the artifacts into the release folder, they cannot be removed.__
+
+After the vote passes, to upload the binaries to Apache mirrors, you move the binaries from dev directory (this should
+be where they are voted) to release directory. This "moving" is the only way you can add stuff to the actual release
+directory. (Note: only (P)PMC members can move to release directory)
+
+Move the sub-directory in "dev" to the corresponding directory in "release". If you've added your signing key to the
+KEYS file, also update the release copy.
+
+```shell
+build/release/release.sh finalize
+```
+
+Verify that the resources are present in https://www.apache.org/dist/incubator/kyuubi/. It may take a while for them
+to be visible. This will be mirrored throughout the Apache network.
+
+For Maven Central Repository, you can Release from the [Apache Nexus Repository Manager](https://repository.apache.org/).
+Log in, open Staging Repositories, find the one voted on, select and click Release and confirm. If successful, it should
+show up under https://repository.apache.org/content/repositories/releases/org/apache/kyuubi/ and the same under 
+https://repository.apache.org/content/groups/maven-staging-group/org/apache/kyuubi/ (look for the correct release version).
+After some time this will be sync’d to [Maven Central](https://search.maven.org/) automatically.
+
+## Promote the release
+
+### Update Website
+
+Fork and clone [Apache Kyuubi website](https://github.com/apache/incubator-kyuubi-website)
+
+1. Add a new markdown file in `src/zh/news/`, `src/en/news/`
+2. Add a new markdown file in `src/zh/release/`, `src/en/release/`
+3. Follow [Build Document](../develop_tools/build_document.md) to build documents, then copy `apache/incubator-kyuubi`'s
+   folder `docs/_build/html` to `apache/incubator-kyuubi-website`'s folder `content/docs/r{RELEASE_VERSION}`
+
+### Create an Announcement
+
+Once everything is working, create an announcement on the website and then send an e-mail to the mailing list.
+You can generate the announcement automatically via `build/release/script/announce.sh`.
+The mailing list includes: `general@incubator.apache.org`, `announce@apache.org`, `dev@kyuubi.apache.org`, `user@spark.apache.org`.
+
+Note that you must use your apache.org email to send the announcement to `announce@apache.org`.
+
+Enjoy an adult beverage of your choice, and congratulations on making a Kyuubi release.
+
+
+## Remove the dist repo directories for deprecated release candidates
+
+Finally, remove the deprecated dist repo directories.
+
+```shell
+cd work/svn-dev
+svn delete https://dist.apache.org/repos/dist/dev/incubator/kyuubi/{RELEASE_TAG} \
+  --username "${ASF_USERNAME}" \
+  --password "${ASF_PASSWORD}" \
+  --message "Remove deprecated Apache Kyuubi ${RELEASE_TAG}" 
+```
diff --git a/content/docs/latest/_sources/connector/flink/flink_table_store.rst.txt b/content/docs/latest/_sources/connector/flink/flink_table_store.rst.txt
new file mode 100644
index 0000000..c2fd667
--- /dev/null
+++ b/content/docs/latest/_sources/connector/flink/flink_table_store.rst.txt
@@ -0,0 +1,111 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Flink Table Store`_
+====================
+
+Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
+supporting high-speed data ingestion and timely data query.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
+   For the knowledge about Flink Table Store not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries towards Flink Table Store, which is more
+convenient, easier to understand, and easier to extend than directly using
+Flink to manipulate Flink Table Store.
+
+Flink Table Store Integration
+------------------------------
+
+To enable the integration of kyuubi flink sql engine and Flink Table Store, you need to:
+
+- Referencing the Flink Table Store :ref:`dependencies<flink-table-store-deps>`
+
+.. _flink-table-store-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi flink sql engine with Flink Table Store support consists of
+
+1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of flink distribution
+3. flink-table-store-dist-<version>.jar (example: flink-table-store-dist-0.2.jar), which can be found in the `Maven Central`_
+
+In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use these methods:
+
+1. Put the Flink Table Store packages into ``$FLINK_HOME/lib`` directly
+2. Set the ``HADOOP_CLASSPATH`` environment variable or copy the `Pre-bundled Hadoop Jar`_ to ``$FLINK_HOME/lib``, as shown below.
+
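+For example, one common way to set ``HADOOP_CLASSPATH`` (assuming a Hadoop client is installed on the engine host) is:
+
+.. code-block:: bash
+
+   export HADOOP_CLASSPATH=`hadoop classpath`
+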
+.. warning::
+   Please mind the compatibility of different Flink Table Store and Flink versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
+
+Flink Table Store Operations
+------------------------------
+
+Taking ``CREATE CATALOG`` as an example,
+
+.. code-block:: sql
+
+   CREATE CATALOG my_catalog WITH (
+     'type'='table-store',
+     'warehouse'='hdfs://nn:8020/warehouse/path' -- or 'file:///tmp/foo/bar'
+   );
+
+   USE CATALOG my_catalog;
+
+Taking ``CREATE TABLE`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE MyTable (
+     user_id BIGINT,
+     item_id BIGINT,
+     behavior STRING,
+     dt STRING,
+     PRIMARY KEY (dt, user_id) NOT ENFORCED
+   ) PARTITIONED BY (dt) WITH (
+     'bucket' = '4'
+   );
+
+Taking ``Query Table`` as an example,
+
+.. code-block:: sql
+
+   SET 'execution.runtime-mode' = 'batch';
+   SELECT * FROM orders WHERE catalog_id=1025;
+
+Taking ``Streaming Query`` as an example,
+
+.. code-block:: sql
+
+   SET 'execution.runtime-mode' = 'streaming';
+   SELECT * FROM MyTable /*+ OPTIONS ('log.scan'='latest') */;
+
+Taking ``Rescale Bucket`` as an example,
+
+.. code-block:: sql
+
+   ALTER TABLE my_table SET ('bucket' = '4');
+   INSERT OVERWRITE my_table PARTITION (dt = '2022-01-01');
+
+
+.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.flink/flink-table-store-dist
+.. _Pre-bundled Hadoop Jar: https://flink.apache.org/downloads.html
+.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
diff --git a/content/docs/latest/_sources/connector/flink/hudi.rst.txt b/content/docs/latest/_sources/connector/flink/hudi.rst.txt
new file mode 100644
index 0000000..0000bde
--- /dev/null
+++ b/content/docs/latest/_sources/connector/flink/hudi.rst.txt
@@ -0,0 +1,117 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Hudi`_
+========
+
+Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform.
+Apache Hudi brings core warehouse and database functionality directly to a data lake.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Hudi`_.
+   For the knowledge about Hudi not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries towards Hudi, which is more convenient, easier to understand,
+and easier to extend than directly using Flink to manipulate Hudi.
+
+Hudi Integration
+----------------
+
+To enable the integration of kyuubi flink sql engine and Hudi through
+Catalog APIs, you need to:
+
+- Referencing the Hudi :ref:`dependencies<flink-hudi-deps>`
+
+.. _flink-hudi-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi flink sql engine with Hudi support consists of
+
+1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of flink distribution
+3. hudi-flink<flink.version>-bundle_<scala.version>-<hudi.version>.jar (example: hudi-flink1.14-bundle_2.12-0.11.1.jar), which can be found in the `Maven Central`_
+
+In order to make the Hudi packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the Hudi packages into ``$FLINK_HOME/lib`` directly
+2. Set ``pipeline.jars=/path/to/hudi-flink-bundle``
+
+Hudi Operations
+---------------
+
+Taking ``Create Table`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE t1 (
+     id INT PRIMARY KEY NOT ENFORCED,
+     name STRING,
+     price DOUBLE
+   ) WITH (
+     'connector' = 'hudi',
+     'path' = 's3://bucket-name/hudi/',
+     'table.type' = 'MERGE_ON_READ' -- this creates a MERGE_ON_READ table, by default is COPY_ON_WRITE
+   );
+
+Taking ``Query Data`` as an example,
+
+.. code-block:: sql
+
+   SELECT * FROM t1;
+
+Taking ``Insert and Update Data`` as an example,
+
+.. code-block:: sql
+
+   INSERT INTO t1 VALUES (1, 'Lucas' , 2.71828);
+
+Taking ``Streaming Query`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE t1 (
+     uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
+     name VARCHAR(10),
+     age INT,
+     ts TIMESTAMP(3),
+     `partition` VARCHAR(20)
+   )
+   PARTITIONED BY (`partition`)
+   WITH (
+     'connector' = 'hudi',
+     'path' = '${path}',
+     'table.type' = 'MERGE_ON_READ',
+     'read.streaming.enabled' = 'true',  -- this option enable the streaming read
+     'read.start-commit' = '20210316134557', -- specifies the start commit instant time
+     'read.streaming.check-interval' = '4' -- specifies the check interval for finding new source commits, default 60s.
+   );
+
+   -- Then query the table in stream mode
+   SELECT * FROM t1;
+
+Taking ``Delete Data`` as an example,
+
+The streaming query can implicitly delete data automatically.
+When consuming data in a streaming query,
+the Hudi Flink source can also accept the change logs from the underlying data source,
+and then apply the UPDATE and DELETE operations at the per-row level.
+
+
+.. _Hudi: https://hudi.apache.org/
+.. _Official Documentation: https://hudi.apache.org/docs/overview
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.hudi
diff --git a/content/docs/latest/_sources/connector/flink/iceberg.rst.txt b/content/docs/latest/_sources/connector/flink/iceberg.rst.txt
new file mode 100644
index 0000000..ab4a701
--- /dev/null
+++ b/content/docs/latest/_sources/connector/flink/iceberg.rst.txt
@@ -0,0 +1,121 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Iceberg`_
+==========
+
+Apache Iceberg is an open table format for huge analytic datasets.
+Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
+using a high-performance table format that works just like a SQL table.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
+   For the knowledge about Iceberg not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries towards Iceberg, which is more
+convenient, easier to understand, and easier to extend than directly using
+Flink to manipulate Iceberg.
+
+Iceberg Integration
+-------------------
+
+To enable the integration of kyuubi flink sql engine and Iceberg through Catalog APIs, you need to:
+
+- Referencing the Iceberg :ref:`dependencies<flink-iceberg-deps>`
+
+.. _flink-iceberg-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi flink sql engine with Iceberg support consists of
+
+1. kyuubi-flink-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of flink distribution
+3. iceberg-flink-runtime-<flink.version>-<iceberg.version>.jar (example: iceberg-flink-runtime-1.14-0.14.0.jar), which can be found in the `Maven Central`_
+
+In order to make the Iceberg packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the Iceberg packages into ``$FLINK_HOME/lib`` directly
+2. Set ``pipeline.jars=/path/to/iceberg-flink-runtime``
+
+.. warning::
+   Please mind the compatibility of different Iceberg and Flink versions, which can be confirmed on the page of `Iceberg multi engine support`_.
+
+Iceberg Operations
+------------------
+
+Taking ``CREATE CATALOG`` as an example,
+
+.. code-block:: sql
+
+   CREATE CATALOG hive_catalog WITH (
+     'type'='iceberg',
+     'catalog-type'='hive',
+     'uri'='thrift://localhost:9083',
+     'warehouse'='hdfs://nn:8020/warehouse/path'
+   );
+   USE CATALOG hive_catalog;
+
+Taking ``CREATE DATABASE`` as an example,
+
+.. code-block:: sql
+
+   CREATE DATABASE iceberg_db;
+   USE iceberg_db;
+
+Taking ``CREATE TABLE`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE `hive_catalog`.`default`.`sample` (
+     id BIGINT COMMENT 'unique id',
+     data STRING
+   );
+
+Taking ``Batch Read`` as an example,
+
+.. code-block:: sql
+
+   SET execution.runtime-mode = batch;
+   SELECT * FROM sample;
+
+Taking ``Streaming Read`` as an example,
+
+.. code-block:: sql
+
+   SET execution.runtime-mode = streaming;
+   SELECT * FROM sample /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/ ;
+
+Taking ``INSERT INTO`` as an example,
+
+.. code-block:: sql
+
+   INSERT INTO `hive_catalog`.`default`.`sample` VALUES (1, 'a');
+   INSERT INTO `hive_catalog`.`default`.`sample` SELECT id, data from other_kafka_table;
+
+Taking ``INSERT OVERWRITE`` as an example,
+note that Flink streaming jobs do not support INSERT OVERWRITE.
+
+.. code-block:: sql
+
+   INSERT OVERWRITE `hive_catalog`.`default`.`sample` VALUES (1, 'a');
+   INSERT OVERWRITE `hive_catalog`.`default`.`sample` PARTITION(data='a') SELECT 6;
+
+.. _Iceberg: https://iceberg.apache.org/
+.. _Official Documentation: https://iceberg.apache.org/docs/latest/
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.iceberg
+.. _Iceberg multi engine support: https://iceberg.apache.org/multi-engine-support/
diff --git a/content/docs/latest/_sources/connector/flink/index.rst.txt b/content/docs/latest/_sources/connector/flink/index.rst.txt
new file mode 100644
index 0000000..c9d9109
--- /dev/null
+++ b/content/docs/latest/_sources/connector/flink/index.rst.txt
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Connectors For Flink SQL Query Engine
+=====================================
+
+.. toctree::
+    :maxdepth: 2
+
+    flink_table_store
+    hudi
+    iceberg
diff --git a/content/docs/latest/_sources/connector/hive/index.rst.txt b/content/docs/latest/_sources/connector/hive/index.rst.txt
new file mode 100644
index 0000000..dddfd5c
--- /dev/null
+++ b/content/docs/latest/_sources/connector/hive/index.rst.txt
@@ -0,0 +1,20 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Connectors for Hive SQL Query Engine
+====================================
+
+.. toctree::
+    :maxdepth: 2
diff --git a/content/docs/latest/_sources/connector/index.rst.txt b/content/docs/latest/_sources/connector/index.rst.txt
new file mode 100644
index 0000000..f7911e6
--- /dev/null
+++ b/content/docs/latest/_sources/connector/index.rst.txt
@@ -0,0 +1,42 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Connectors
+==========
+
+This section describes the connectors available for different kyuubi engines to access data from various data sources.
+
+.. note:: Is your connector missing?
+   `Report a feature request <https://kyuubi.apache.org/issue_tracking.html>`_ or help us document it.
+
+.. toctree::
+    :maxdepth: 2
+
+    spark/index
+
+.. toctree::
+    :maxdepth: 2
+
+    flink/index
+
+.. toctree::
+    :maxdepth: 2
+
+    hive/index
+
+.. toctree::
+    :maxdepth: 2
+
+    trino/index
diff --git a/content/docs/latest/_sources/connector/spark/delta_lake.rst.txt b/content/docs/latest/_sources/connector/spark/delta_lake.rst.txt
new file mode 100644
index 0000000..164036c
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/delta_lake.rst.txt
@@ -0,0 +1,95 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Delta Lake`_
+=============
+
+Delta lake is an open-source project that enables building a Lakehouse
+Architecture on top of existing storage systems such as S3, ADLS, GCS,
+and HDFS.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and
+   operation of `Delta Lake`_.
+   For the knowledge about delta lake not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using kyuubi, we can run SQL queries towards delta lake which is more
+convenient, easy to understand, and easy to expand than directly using
+spark to manipulate delta lake.
+
+Delta Lake Integration
+----------------------
+
+To enable the integration of kyuubi spark sql engine and delta lake through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the delta lake :ref:`dependencies<spark-delta-lake-deps>`
+- Set the spark extension and catalog :ref:`configurations<spark-delta-lake-conf>`
+
+.. _spark-delta-lake-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with delta lake supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with kyuubi distributions
+2. a copy of spark distribution
+3. delta-core & delta-storage, which can be found in the `Maven Central`_
+
+In order to make the delta packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the delta packages into ``$SPARK_HOME/jars`` directly
+2. Set ``spark.jars=/path/to/delta-core,/path/to/delta-storage``
+
+.. warning::
+   Please mind the compatibility of different Delta Lake and Spark versions, which can be confirmed on the page of `delta release notes`_.
+
+.. _spark-delta-lake-conf:
+
+Configurations
+**************
+
+To activate functionality of delta lake, we can set the following configurations:
+
+.. code-block:: properties
+
+   spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension
+   spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog
+
+Delta Lake Operations
+---------------------
+
+For end-users who only use a pure SQL interface, there is not much difference between
+using a delta table and a regular hive table, unless you are going to use some advanced
+features, which are still expressed in SQL, just with some extra syntax.
+
+Taking ``CREATE TABLE`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE IF NOT EXISTS kyuubi_delta (
+     id INT,
+     name STRING,
+     org STRING,
+     url STRING,
+     start TIMESTAMP
+   ) USING DELTA;
+
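+For instance, inserting a few rows and reading them back works the same way as it
+does for a regular table. A minimal sketch against the ``kyuubi_delta`` table created
+above (the values are illustrative only):
+
+.. code-block:: sql
+
+   -- append some sample rows to the delta table
+   INSERT INTO kyuubi_delta VALUES
+     (1, 'kyuubi', 'apache', 'https://kyuubi.apache.org/', timestamp '2021-11-01 00:00:00'),
+     (2, 'spark', 'apache', 'https://spark.apache.org/', timestamp '2021-11-02 00:00:00');
+
+   -- read the rows back
+   SELECT id, name, org, url, start FROM kyuubi_delta ORDER BY id;
+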
+.. _Delta Lake: https://delta.io/
+.. _Official Documentation: https://docs.delta.io/latest/index.html
+.. _Maven Central: https://mvnrepository.com/artifact/io.delta/delta-core
+.. _Delta release notes: https://github.com/delta-io/delta/releases
\ No newline at end of file
diff --git a/content/docs/latest/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt b/content/docs/latest/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt
new file mode 100644
index 0000000..dd302c5
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/delta_lake_with_azure_blob.rst.txt
@@ -0,0 +1,345 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Delta Lake with Microsoft Azure Blob Storage
+============================================
+
+Registration And Configuration
+----------------------------------------------
+
+Register An Account And Log In
+******************************
+
+Regarding the Microsoft Azure account, please contact your organization or register
+an account as an individual. For details, please refer to the `Microsoft Azure official
+website`_.
+
+Create Storage Container
+************************
+
+After logging in with your Microsoft Azure account, please follow the steps below to create a data storage container:
+
+.. image:: ../../imgs/deltalake/azure_create_new_container.png
+
+Get Access Key
+**************
+
+.. image:: ../../imgs/deltalake/azure_create_azure_access_key.png
+
+Deploy Spark
+------------
+
+Download Spark Package
+**********************
+
+Download the spark package that matches your environment from the `spark official website`_,
+and then unpackage it:
+
+.. code-block:: shell
+
+   tar -xzvf spark-3.2.0-bin-hadoop3.2.tgz
+
+Config Spark
+************
+
+Enter the ``$SPARK_HOME/conf`` directory and execute:
+
+.. code-block:: shell
+
+   cp spark-defaults.conf.template spark-defaults.conf
+
+Add the following configuration to ``spark-defaults.conf``, adjusting the values to your own environment:
+
+.. code-block:: properties
+
+   spark.master                     spark://<YOUR_HOST>:7077
+   spark.sql.extensions             io.delta.sql.DeltaSparkSessionExtension
+   spark.sql.catalog.spark_catalog  org.apache.spark.sql.delta.catalog.DeltaCatalog
+
+Create a new file named ``core-site.xml`` under ``$SPARK_HOME/conf`` directory, and add following configuration:
+
+.. code-block:: xml
+
+   <?xml version="1.0" encoding="UTF-8"?>
+   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+   <configuration>
+   <property>
+       <name>fs.AbstractFileSystem.wasb.impl</name>
+       <value>org.apache.hadoop.fs.azure.Wasb</value>
+    </property>
+    <property>
+     <name>fs.azure.account.key.YOUR_AZURE_ACCOUNT.blob.core.windows.net</name>
+     <value>YOUR_AZURE_ACCOUNT_ACCESS_KEY</value>
+    </property>
+    <property>
+       <name>fs.azure.block.blob.with.compaction.dir</name>
+       <value>/hbase/WALs,/tmp/myblobfiles</value>
+    </property>
+    <property>
+       <name>fs.azure</name>
+       <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
+    </property>
+   <property>
+       <name>fs.azure.enable.append.support</name>
+       <value>true</value>
+    </property>
+   </configuration>
+
+
+Copy Dependencies To Spark
+**************************
+
+Copy the jar packages required by delta lake and microsoft azure to the ``./spark/jars`` directory:
+
+.. code-block:: shell
+
+   wget https://repo1.maven.org/maven2/io/delta/delta-core_2.12/1.0.0/delta-core_2.12-1.0.0.jar -O ./spark/jars/delta-core_2.12-1.0.0.jar
+   wget https://repo1.maven.org/maven2/com/microsoft/azure/azure-storage/8.6.6/azure-storage-8.6.6.jar -O ./spark/jars/azure-storage-8.6.6.jar
+   wget https://repo1.maven.org/maven2/com/azure/azure-storage-blob/12.14.2/azure-storage-blob-12.14.2.jar -O ./spark/jars/azure-storage-blob-12.14.2.jar
+   wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.1.1/hadoop-azure-3.1.1.jar -O ./spark/jars/hadoop-azure-3.1.1.jar
+
+Start Spark Standalone cluster
+******************************
+
+.. code-block:: shell
+
+   ./spark/sbin/start-master.sh -h <YOUR_HOST> -p 7077 --webui-port 9090
+   ./spark/sbin/start-worker.sh spark://<YOUR_HOST>:7077
+
+Test The Connectivity Of Spark And Delta Lake
+*********************************************
+
+Start spark shell:
+
+.. code-block:: shell
+
+   ./bin/spark-shell
+
+Generate a piece of random data and push them to delta lake:
+
+.. code-block:: scala
+
+   scala> val data = spark.range(1000, 2000)
+   scala> data.write.format("delta").mode("overwrite").save("wasbs://<YOUR_CONTAINER_NAME>@<YOUR_AZURE_ACCOUNT>.blob.core.windows.net/<YOUR_TABLE_NAME>")
+
+After this, you can check your data on the Azure web UI. For example, my container name is 1000 and the table name is alexDemo20211127:
+
+.. image:: ../../imgs/deltalake/azure_spark_connection_test_storage.png
+
+You can also verify the data by reading it back from delta lake:
+
+.. code-block:: scala
+
+   scala> val df=spark.read.format("delta").load("wasbs://<YOUR_CONTAINER_NAME>@<YOUR_AZURE_ACCOUNT>.blob.core.windows.net/<YOUR_TABLE_NAME>")
+   scala> df.show()
+
+If the above steps run without problems, spark and delta lake are integrated correctly.
+
+Deploy Kyuubi
+-------------
+
+Install Kyuubi
+**************
+
+1. Download the latest version of kyuubi from the `kyuubi download page`_.
+
+2. Unpackage:
+
+   tar -xzvf apache-kyuubi-|release|-incubating-bin.tgz
+
+Config Kyuubi
+*************
+
+Enter the ``./kyuubi/conf`` directory:
+
+.. code-block:: shell
+
+   cp kyuubi-defaults.conf.template kyuubi-defaults.conf
+   vim kyuubi-defaults.conf
+
+Add the following content:
+
+.. code-block:: properties
+
+   spark.master                    spark://<YOUR_HOST>:7077
+   kyuubi.authentication           NONE
+   kyuubi.frontend.bind.host       <YOUR_HOST>
+   kyuubi.frontend.bind.port       10009
+   # If you use your own zk cluster, you need to configure your zk host port.
+   # kyuubi.ha.zookeeper.quorum    <YOUR_HOST>:2181
+
+Start Kyuubi
+************
+
+.. code-block:: shell
+
+   bin/kyuubi start
+
+Check the kyuubi log to confirm that kyuubi has started and to find the JDBC connection URL:
+
+.. code-block:: log
+
+   2021-11-26 17:49:50.235 INFO service.ThriftFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://HOST:10009/
+   2021-11-26 17:49:50.265 INFO client.ServiceDiscovery: Created a /kyuubi/serviceUri=host:10009;version=1.3.1-incubating;sequence=0000000037 on ZooKeeper for KyuubiServer uri: host:10009
+   2021-11-26 17:49:50.267 INFO server.KyuubiServer: Service[KyuubiServer] is started.
+
+You can get the JDBC connection URL from the log above.
+
+Test The Connectivity Of Kyuubi And Delta Lake
+**********************************************
+
+Use ``$KYUUBI_HOME/bin/beeline`` tool,
+
+.. code-block:: shell
+
+   ./bin/beeline -u 'jdbc:hive2://<YOUR_HOST>:10009/'
+
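+Once connected, running any simple statement is enough to confirm that the engine
+can be launched, for example:
+
+.. code-block:: sql
+
+   SHOW DATABASES;
+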
+At the same time, you can also check whether the engine is running on the spark UI:
+
+.. image:: ../../imgs/deltalake/kyuubi_start_status_spark_UI.png
+
+When the engine starts, it exposes a thrift endpoint and registers itself into ZooKeeper, so that the Kyuubi server can get the connection info from ZooKeeper and establish a connection to the engine.
+You can check the registration details under the ZooKeeper path ``/kyuubi_USER/anonymous``.
+
+Dealing with Delta Lake Data Using Kyuubi
+------------------------------------------------
+
+The following examples operate on delta lake data through SQL:
+
+Create Table
+************
+
+.. code-block:: sql
+
+   -- Create or replace table with path
+   CREATE OR REPLACE TABLE delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129` (
+     date DATE,
+     eventId STRING,
+     eventType STRING,
+     data STRING)
+   USING DELTA
+   PARTITIONED BY (date);
+
+Insert Data
+***********
+
+Append Mode
+^^^^^^^^^^^
+
+.. code-block:: sql
+
+   INSERT INTO delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129` (
+       date,
+       eventId,
+       eventType,
+       data)
+   VALUES
+       (now(),'001','test','Hello World!'),
+       (now(),'002','test','Hello World!'),
+       (now(),'003','test','Hello World!');
+
+Result:
+
+.. code-block:: text
+
+   +-------------+----------+------------+---------------+
+   |    date     | eventId  | eventType  |     data      |
+   +-------------+----------+------------+---------------+
+   | 2021-11-29  | 001      | test       | Hello World!  |
+   | 2021-11-29  | 003      | test       | Hello World!  |
+   | 2021-11-29  | 002      | test       | Hello World!  |
+   +-------------+----------+------------+---------------+
+
+Overwrite Mode
+^^^^^^^^^^^^^^
+
+.. code-block:: sql
+
+   INSERT OVERWRITE TABLE delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`(
+       date,
+       eventId,
+       eventType,
+       data)
+   VALUES
+   (now(),'001','test','hello kyuubi'),
+   (now(),'002','test','hello kyuubi');
+
+Result:
+
+.. code-block:: text
+
+   +-------------+----------+------------+---------------+
+   |    date     | eventId  | eventType  |     data      |
+   +-------------+----------+------------+---------------+
+   | 2021-11-29  | 002      | test       | hello kyuubi  |
+   | 2021-11-29  | 001      | test       | hello kyuubi  |
+   +-------------+----------+------------+---------------+
+
+Delete Table Data
+*****************
+
+.. code-block:: sql
+
+   DELETE FROM
+       delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`
+   WHERE eventId = '002';
+
+Result:
+
+.. code-block:: text
+
+   +-------------+----------+------------+---------------+
+   |    date     | eventId  | eventType  |     data      |
+   +-------------+----------+------------+---------------+
+   | 2021-11-29  | 001      | test       | hello kyuubi  |
+   +-------------+----------+------------+---------------+
+
+Update Table Data
+*****************
+
+.. code-block:: sql
+
+   UPDATE
+       delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`
+   SET data = 'This is a test for update data.'
+   WHERE eventId = '001';
+
+Result:
+
+.. code-block:: text
+
+   +-------------+----------+------------+----------------------------------+
+   |    date     | eventId  | eventType  |               data               |
+   +-------------+----------+------------+----------------------------------+
+   | 2021-11-29  | 001      | test       | This is a test for update data.  |
+   +-------------+----------+------------+----------------------------------+
+
+Select Table Data
+*****************
+
+.. code-block:: sql
+
+   SELECT *
+   FROM
+       delta.`wasbs://1000@azure_account.blob.core.windows.net/alexDemo20211129`;
+
+Result:
+
+.. code-block:: text
+
+   +-------------+----------+------------+----------------------------------+
+   |    date     | eventId  | eventType  |               data               |
+   +-------------+----------+------------+----------------------------------+
+   | 2021-11-29  | 001      | test       | This is a test for update data.  |
+   +-------------+----------+------------+----------------------------------+
+
+.. _Microsoft Azure official website: https://azure.microsoft.com/en-gb/
+.. _spark official website: https://spark.apache.org/downloads.html
+.. _kyuubi download page: https://kyuubi.apache.org/releases.html
diff --git a/content/docs/latest/_sources/connector/spark/flink_table_store.rst.txt b/content/docs/latest/_sources/connector/spark/flink_table_store.rst.txt
new file mode 100644
index 0000000..ee4c2b3
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/flink_table_store.rst.txt
@@ -0,0 +1,90 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Flink Table Store`_
+====================
+
+Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
+supporting high-speed data ingestion and timely data query.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
+   For the knowledge about Flink Table Store not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using kyuubi, we can run SQL queries towards Flink Table Store which is more
+convenient, easy to understand, and easy to expand than directly using
+spark to manipulate Flink Table Store.
+
+Flink Table Store Integration
+-----------------------------
+
+To enable the integration of kyuubi spark sql engine and Flink Table Store through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the Flink Table Store :ref:`dependencies<spark-flink-table-store-deps>`
+- Set the spark extension and catalog :ref:`configurations<spark-flink-table-store-conf>`
+
+.. _spark-flink-table-store-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with Flink Table Store supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. flink-table-store-spark-<version>.jar (example: flink-table-store-spark-0.2.jar), which can be found in the `Maven Central`_
+
+In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the Flink Table Store packages into ``$SPARK_HOME/jars`` directly
+2. Set ``spark.jars=/path/to/flink-table-store-spark``
+
+.. warning::
+   Please mind the compatibility of different Flink Table Store and Spark versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
+
+.. _spark-flink-table-store-conf:
+
+Configurations
+**************
+
+To activate functionality of Flink Table Store, we can set the following configurations:
+
+.. code-block:: properties
+
+   spark.sql.catalog.tablestore=org.apache.flink.table.store.spark.SparkCatalog
+   spark.sql.catalog.tablestore.warehouse=file:/tmp/warehouse
+
+Flink Table Store Operations
+----------------------------
+
+Flink Table Store supports reading table store tables through Spark.
+A common scenario is to write data with Flink and read data with Spark.
+You can follow this document `Flink Table Store Quick Start`_  to write data to a table store table
+and then use kyuubi spark sql engine to query the table with the following SQL ``SELECT`` statement.
+
+
+.. code-block:: sql
+
+   SELECT * FROM tablestore.default.word_count;
+
+
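+You can also verify that the catalog is wired up correctly by listing its tables and
+describing one of them. A small sketch, assuming the ``word_count`` table from the
+quick start exists:
+
+.. code-block:: sql
+
+   SHOW TABLES IN tablestore.default;
+   DESCRIBE TABLE tablestore.default.word_count;
+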
+
+.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Flink Table Store Quick Start: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/try-table-store/quick-start/
+.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.flink
+.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
diff --git a/content/docs/latest/_sources/connector/spark/hudi.rst.txt b/content/docs/latest/_sources/connector/spark/hudi.rst.txt
new file mode 100644
index 0000000..045e751
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/hudi.rst.txt
@@ -0,0 +1,112 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Hudi`_
+========
+
+Apache Hudi (pronounced “hoodie”) is the next generation streaming data lake platform.
+Apache Hudi brings core warehouse and database functionality directly to a data lake.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Hudi`_.
+   For the knowledge about Hudi not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries towards Hudi which is more convenient, easy to understand,
+and easy to expand than directly using Spark to manipulate Hudi.
+
+Hudi Integration
+----------------
+
+To enable the integration of kyuubi spark sql engine and Hudi through
+Catalog APIs, you need to:
+
+- Reference the Hudi :ref:`dependencies<spark-hudi-deps>`
+- Set the Spark extension and catalog :ref:`configurations<spark-hudi-conf>`
+
+.. _spark-hudi-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with Hudi supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. hudi-spark<spark.version>-bundle_<scala.version>-<hudi.version>.jar (example: hudi-spark3.2-bundle_2.12-0.11.1.jar), which can be found in the `Maven Central`_
+
+In order to make the Hudi packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the Hudi packages into ``$SPARK_HOME/jars`` directly
+2. Set ``spark.jars=/path/to/hudi-spark-bundle``
+
+.. _spark-hudi-conf:
+
+Configurations
+**************
+
+To activate functionality of Hudi, we can set the following configurations:
+
+.. code-block:: properties
+
+   # Spark 3.2
+   spark.serializer=org.apache.spark.serializer.KryoSerializer
+   spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension
+   spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog
+
+   # Spark 3.1
+   spark.serializer=org.apache.spark.serializer.KryoSerializer
+   spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension
+
+Hudi Operations
+---------------
+
+Taking ``Create Table`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE hudi_cow_nonpcf_tbl (
+     uuid INT,
+     name STRING,
+     price DOUBLE
+   ) USING HUDI;
+
+Taking ``Query Data`` as an example,
+
+.. code-block:: sql
+
+   SELECT * FROM hudi_cow_nonpcf_tbl WHERE uuid < 20;
+
+Taking ``Insert Data`` as an example,
+
+.. code-block:: sql
+
+   INSERT INTO hudi_cow_nonpcf_tbl SELECT 1, 'a1', 20;
+
+
+Taking ``Update Data`` as an example,
+
+.. code-block:: sql
+
+   UPDATE hudi_cow_nonpcf_tbl SET name = 'foo', price = price * 2 WHERE uuid = 1;
+
+Taking ``Delete Data`` as an example,
+
+.. code-block:: sql
+
+   DELETE FROM hudi_cow_nonpcf_tbl WHERE uuid = 1;
+
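+Hudi's Spark SQL integration also supports ``MERGE INTO`` for upserts. A minimal
+sketch against the table above, using an inline source (the values are illustrative;
+check the Hudi documentation for the exact ``MERGE INTO`` requirements of your Hudi version):
+
+.. code-block:: sql
+
+   MERGE INTO hudi_cow_nonpcf_tbl t
+   USING (SELECT 1 AS uuid, 'a1_new' AS name, 40.0 AS price) s
+   ON t.uuid = s.uuid
+   WHEN MATCHED THEN UPDATE SET *
+   WHEN NOT MATCHED THEN INSERT *;
+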
+.. _Hudi: https://hudi.apache.org/
+.. _Official Documentation: https://hudi.apache.org/docs/overview
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.hudi
diff --git a/content/docs/latest/_sources/connector/spark/iceberg.rst.txt b/content/docs/latest/_sources/connector/spark/iceberg.rst.txt
new file mode 100644
index 0000000..2ce58aa
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/iceberg.rst.txt
@@ -0,0 +1,124 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Iceberg`_
+==========
+
+Apache Iceberg is an open table format for huge analytic datasets.
+Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
+using a high-performance table format that works just like a SQL table.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
+   For the knowledge about Iceberg not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using kyuubi, we can run SQL queries towards Iceberg which is more
+convenient, easy to understand, and easy to expand than directly using
+spark to manipulate Iceberg.
+
+Iceberg Integration
+-------------------
+
+To enable the integration of kyuubi spark sql engine and Iceberg through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the Iceberg :ref:`dependencies<spark-iceberg-deps>`
+- Set the spark extension and catalog :ref:`configurations<spark-iceberg-conf>`
+
+.. _spark-iceberg-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with Iceberg supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. iceberg-spark-runtime-<spark.version>_<scala.version>-<iceberg.version>.jar (example: iceberg-spark-runtime-3.2_2.12-0.14.0.jar), which can be found in the `Maven Central`_
+
+In order to make the Iceberg packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the Iceberg packages into ``$SPARK_HOME/jars`` directly
+2. Set ``spark.jars=/path/to/iceberg-spark-runtime``
+
+.. warning::
+   Please mind the compatibility of different Iceberg and Spark versions, which can be confirmed on the page of `Iceberg multi engine support`_.
+
+.. _spark-iceberg-conf:
+
+Configurations
+**************
+
+To activate functionality of Iceberg, we can set the following configurations:
+
+.. code-block:: properties
+
+   spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog
+   spark.sql.catalog.spark_catalog.type=hive
+   spark.sql.catalog.spark_catalog.uri=thrift://metastore-host:port
+   spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
+
+Iceberg Operations
+------------------
+
+Taking ``CREATE TABLE`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE foo (
+     id bigint COMMENT 'unique id',
+     data string)
+   USING iceberg;
+
+Taking ``SELECT`` as an example,
+
+.. code-block:: sql
+
+   SELECT * FROM foo;
+
+Taking ``INSERT`` as an example,
+
+.. code-block:: sql
+
+   INSERT INTO foo VALUES (1, 'a'), (2, 'b'), (3, 'c');
+
+Taking ``UPDATE`` as an example, Spark 3.1 added support for UPDATE queries that update matching rows in tables.
+
+.. code-block:: sql
+
+   UPDATE foo SET data = 'd', id = 4 WHERE id >= 3 and id < 4;
+
+Taking ``DELETE FROM`` as an example, Spark 3 added support for DELETE FROM queries to remove data from tables.
+
+.. code-block:: sql
+
+   DELETE FROM foo WHERE id >= 1 and id < 2;
+
+Taking ``MERGE INTO`` as an example,
+
+.. code-block:: sql
+
+   MERGE INTO target_table t
+   USING source_table s
+   ON t.id = s.id
+   WHEN MATCHED AND s.opType = 'delete' THEN DELETE
+   WHEN MATCHED AND s.opType = 'update' THEN UPDATE SET id = s.id, data = s.data
+   WHEN NOT MATCHED AND s.opType = 'insert' THEN INSERT (id, data) VALUES (s.id, s.data);
+
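+Besides DML, Iceberg also exposes table metadata such as snapshots and history as
+queryable tables. A small sketch, assuming the ``foo`` table above lives in the
+``default`` database of the configured ``spark_catalog``:
+
+.. code-block:: sql
+
+   SELECT * FROM spark_catalog.default.foo.snapshots;
+   SELECT * FROM spark_catalog.default.foo.history;
+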
+.. _Iceberg: https://iceberg.apache.org/
+.. _Official Documentation: https://iceberg.apache.org/docs/latest/
+.. _Maven Central: https://mvnrepository.com/artifact/org.apache.iceberg
+.. _Iceberg multi engine support: https://iceberg.apache.org/multi-engine-support/
diff --git a/content/docs/latest/_sources/connector/spark/index.rst.txt b/content/docs/latest/_sources/connector/spark/index.rst.txt
new file mode 100644
index 0000000..7109eda
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/index.rst.txt
@@ -0,0 +1,42 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Connectors for Spark SQL Query Engine
+=====================================
+
+The Kyuubi Spark SQL Query Engine uses Spark DataSource APIs (V1/V2) to access
+data from different data sources.
+
+By default, it provides access to hive warehouses with various file formats
+supported, such as parquet, orc, json, etc.
+
+Also, it can easily integrate with other third-party libraries, such as Hudi,
+Iceberg, Delta Lake, Kudu, Flink Table Store, HBase, Cassandra, etc.
+
+We also provide sample data sources like TPC-DS and TPC-H for testing and
+benchmarking purposes.
+
+.. toctree::
+    :maxdepth: 2
+
+    delta_lake
+    delta_lake_with_azure_blob
+    hudi
+    iceberg
+    kudu
+    flink_table_store
+    tidb
+    tpcds
+    tpch
diff --git a/content/docs/latest/_sources/connector/spark/kudu.md.txt b/content/docs/latest/_sources/connector/spark/kudu.md.txt
new file mode 100644
index 0000000..0d3c850
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/kudu.md.txt
@@ -0,0 +1,185 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Kudu
+
+## What is Apache Kudu
+
+> A new addition to the open source Apache Hadoop ecosystem, Apache Kudu completes Hadoop's storage layer to enable fast analytics on fast data.
+
+When you are reading this documentation, you do not necessarily have to be familiar with [Apache Kudu](https://kudu.apache.org/), but at least you should have a running Kudu cluster that you can connect to. It is even better if you understand what Apache Kudu is capable of.
+
+For any Apache Kudu background knowledge missing on this page, please refer to its official website.
+
+## Why Kyuubi on Kudu
+
+Basically, Kyuubi can take the place of HiveServer2 as a multi-tenant ad-hoc SQL on Hadoop solution, with the advantages of speed and power coming from Spark SQL. You can run SQL queries towards both data source tables and Hive tables, touching only the data and computing resources you are authorized to use.
+
+> Spark SQL supports operating on a variety of data sources through the DataFrame interface. A DataFrame can be operated on using relational transformations and can also be used to create a temporary view. Registering a DataFrame as a temporary view allows you to run SQL queries over its data. This section describes the general methods for loading and saving data using the Spark Data Sources and then goes into specific options that are available for the built-in data sources.
+
+In Kyuubi, we can register Kudu tables and other data source tables as Spark temporary views to enable federated union queries across Hive, Kudu, and other data sources.
+
+## Kudu Integration with Apache Spark
+Before integrating Kyuubi with Kudu, we strongly suggest that you integrate and test Spark with Kudu first. You may find the guide from Kudu's online documentation -- [Kudu Integration with Spark](https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark)
+
+## Kudu Integration with Kyuubi
+
+#### Install Kudu Spark Dependency
+
+Confirm your Kudu cluster version and download the corresponding kudu spark dependency library, such as [org.apache.kudu:kudu-spark3_2.12-1.14.0](https://repo1.maven.org/maven2/org/apache/kudu/kudu-spark3_2.12/1.14.0/kudu-spark3_2.12-1.14.0.jar), to `$SPARK_HOME/jars`.
+
+#### Start Kyuubi
+
+Now, you can start the Kyuubi server with this Kudu-embedded Spark distribution.
+
+#### Start Beeline Or Other Client You Prefer
+
+```shell
+bin/beeline -u 'jdbc:hive2://<host>:<port>/;principal=<if kerberized>;#spark.yarn.queue=kyuubi_test'
+```
+
+#### Register Kudu table as Spark Temporary view
+
+```sql
+CREATE TEMPORARY VIEW kudutest
+USING kudu
+options ( 
+  kudu.master "ip1:port1,ip2:port2,...",
+  kudu.table "kudu::test.testtbl")
+```
+
+```sql
+0: jdbc:hive2://spark5.jd.163.org:10009/> show tables;
+19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Running query 'show tables' with 1104328b-515c-4f8b-8a68-1c0b202bc9ed
+19/07/09 15:28:03 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
+19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs before optimization
+19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs without optimization
+19/07/09 15:28:03 INFO DAGScheduler: Asked to cancel job group 1104328b-515c-4f8b-8a68-1c0b202bc9ed
++-----------+-----------------------------+--------------+--+
+| database  |          tableName          | isTemporary  |
++-----------+-----------------------------+--------------+--+
+| kyuubi    | hive_tbl                    | false        |
+|           | kudutest                    | true         |
++-----------+-----------------------------+--------------+--+
+2 rows selected (0.29 seconds)
+```
+
+#### Query Kudu Table
+
+```sql
+0: jdbc:hive2://spark5.jd.163.org:10009/> select * from kudutest;
+19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Running query 'select * from kudutest' with ac3e8553-0d79-4c57-add1-7d3ffe34ba16
+19/07/09 15:25:17 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
+19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 3 jobs before optimization
+19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 3 jobs without optimization
+19/07/09 15:25:17 INFO DAGScheduler: Asked to cancel job group ac3e8553-0d79-4c57-add1-7d3ffe34ba16
++---------+---------------+----------------+--+
+| userid  | sharesetting  | notifysetting  |
++---------+---------------+----------------+--+
+| 1       | 1             | 1              |
+| 5       | 5             | 5              |
+| 2       | 2             | 2              |
+| 3       | 3             | 3              |
+| 4       | 4             | 4              |
++---------+---------------+----------------+--+
+5 rows selected (1.083 seconds)
+```
+
+
+#### Join Kudu table with Hive table
+
+```sql
+0: jdbc:hive2://spark5.jd.163.org:10009/> select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1;
+19/07/09 15:31:01 INFO ExecuteStatementInClientMode: Running query 'select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1' with 6982fa5c-29fa-49be-a5bf-54c935bbad18
+19/07/09 15:31:01 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
+<omitted lines.... >
+19/07/09 15:31:01 INFO DAGScheduler: Asked to cancel job group 6982fa5c-29fa-49be-a5bf-54c935bbad18
++---------+---------------+----------------+---------+---------------+----------------+--+
+| userid  | sharesetting  | notifysetting  | userid  | sharesetting  | notifysetting  |
++---------+---------------+----------------+---------+---------------+----------------+--+
+| 2       | 2             | 2              | 1       | 1             | 1              |
+| 3       | 3             | 3              | 2       | 2             | 2              |
+| 4       | 4             | 4              | 3       | 3             | 3              |
++---------+---------------+----------------+---------+---------------+----------------+--+
+3 rows selected (1.63 seconds)
+```
+
+#### Insert to Kudu table
+
+You should notice that only `INSERT INTO` is supported by Kudu; `INSERT OVERWRITE` is not supported.
+
+```sql
+0: jdbc:hive2://spark5.jd.163.org:10009/> insert overwrite table kudutest select *  from hive_tbl;
+19/07/09 15:35:29 INFO ExecuteStatementInClientMode: Running query 'insert overwrite table kudutest select *  from hive_tbl' with 1afdb791-1aa7-4ceb-8ba8-ff53c17615d1
+19/07/09 15:35:29 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
+19/07/09 15:35:30 ERROR ExecuteStatementInClientMode:
+Error executing query as bdms_hzyaoqin,
+insert overwrite table kudutest select *  from hive_tbl
+Current operation state RUNNING,
+java.lang.UnsupportedOperationException: overwrite is not yet supported
+	at org.apache.kudu.spark.kudu.KuduRelation.insert(DefaultSource.scala:424)
+	at org.apache.spark.sql.execution.datasources.InsertIntoDataSourceCommand.run(InsertIntoDataSourceCommand.scala:42)
+	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
+	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
+	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
+	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
+	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
+	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
+	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
+	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
+	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
+	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
+	at org.apache.spark.sql.SparkSQLUtils$.toDataFrame(SparkSQLUtils.scala:39)
+	at org.apache.kyuubi.operation.statement.ExecuteStatementInClientMode.execute(ExecuteStatementInClientMode.scala:152)
+	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1$$anon$2.run(ExecuteStatementOperation.scala:74)
+	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1$$anon$2.run(ExecuteStatementOperation.scala:70)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at javax.security.auth.Subject.doAs(Subject.java:422)
+	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
+	at org.apache.kyuubi.operation.statement.ExecuteStatementOperation$$anon$1.run(ExecuteStatementOperation.scala:70)
+	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
+	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
+	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
+	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
+	at java.lang.Thread.run(Thread.java:745)
+
+
+19/07/09 15:35:30 INFO DAGScheduler: Asked to cancel job group 1afdb791-1aa7-4ceb-8ba8-ff53c17615d1
+
+```
+
+```sql
+0: jdbc:hive2://spark5.jd.163.org:10009/> insert into table kudutest select * from hive_tbl;
+19/07/09 15:36:26 INFO ExecuteStatementInClientMode: Running query 'insert into table kudutest select *  from hive_tbl' with f7460400-0564-4f98-93b6-ad76e579e7af
+19/07/09 15:36:26 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
+<omitted lines ...>
+19/07/09 15:36:27 INFO DAGScheduler: ResultStage 36 (foreachPartition at KuduContext.scala:332) finished in 0.322 s
+19/07/09 15:36:27 INFO DAGScheduler: Job 36 finished: foreachPartition at KuduContext.scala:332, took 0.324586 s
+19/07/09 15:36:27 INFO KuduContext: completed upsert ops: duration histogram: 33.333333333333336%: 2ms, 66.66666666666667%: 64ms, 100.0%: 102ms, 100.0%: 102ms
+19/07/09 15:36:27 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs before optimization
+19/07/09 15:36:27 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs without optimization
+19/07/09 15:36:27 INFO DAGScheduler: Asked to cancel job group f7460400-0564-4f98-93b6-ad76e579e7af
++---------+--+
+| Result  |
++---------+--+
++---------+--+
+No rows selected (0.611 seconds)
+```
+
+## References
+[https://kudu.apache.org/](https://kudu.apache.org/)
+[https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark](https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark)
+[https://github.com/apache/incubator-kyuubi](https://github.com/apache/incubator-kyuubi)
+[https://spark.apache.org/docs/latest/sql-data-sources.html](https://spark.apache.org/docs/latest/sql-data-sources.html)
diff --git a/content/docs/latest/_sources/connector/spark/tidb.rst.txt b/content/docs/latest/_sources/connector/spark/tidb.rst.txt
new file mode 100644
index 0000000..366f3b2
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/tidb.rst.txt
@@ -0,0 +1,103 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`TiDB`_
+==========
+
+TiDB is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing
+(HTAP) workloads.
+
+TiSpark is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP
+queries. It enjoys the merits of both the Spark platform and the distributed clusters
+of TiKV while seamlessly integrated to TiDB to provide one-stop HTAP solutions for online
+transactions and analyses.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of TiDB and TiSpark.
+   For the knowledge not mentioned in this article, you can obtain it from TiDB `Official Documentation`_.
+
+By using kyuubi, we can run SQL queries towards TiDB/TiKV which is more
+convenient, easy to understand, and easy to expand than directly using
+spark to manipulate TiDB/TiKV.
+
+TiDB Integration
+-------------------
+
+To enable the integration of kyuubi spark sql engine and TiDB through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the TiSpark :ref:`dependencies<spark-tidb-deps>`
+- Set the spark extension and catalog :ref:`configurations<spark-tidb-conf>`
+
+.. _spark-tidb-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with TiDB supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. tispark-assembly-<spark.version>_<scala.version>-<tispark.version>.jar (example: tispark-assembly-3.2_2.12-3.0.1.jar), which can be found in the `Maven Central`_
+
+In order to make the TiSpark packages visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the TiSpark packages into ``$SPARK_HOME/jars`` directly
+2. Set ``spark.jars=/path/to/tispark-assembly``
+
+.. warning::
+   Please mind the compatibility of different TiDB, TiSpark and Spark versions, which can be confirmed on the page of `TiSpark Environment setup`_.
+
+.. _spark-tidb-conf:
+
+Configurations
+**************
+
+To activate functionality of TiSpark, we can set the following configurations:
+
+.. code-block:: properties
+
+   spark.tispark.pd.addresses $pd_host:$pd_port
+   spark.sql.extensions org.apache.spark.sql.TiExtensions
+   spark.sql.catalog.tidb_catalog  org.apache.spark.sql.catalyst.catalog.TiCatalog
+   spark.sql.catalog.tidb_catalog.pd.addresses $pd_host:$pd_port
+
+The ``spark.tispark.pd.addresses`` and ``spark.sql.catalog.tidb_catalog.pd.addresses`` configurations
+allow you to put in multiple PD servers. Specify the port number for each of them.
+
+For example, when you have multiple PD servers on ``10.16.20.1,10.16.20.2,10.16.20.3`` with the port ``2379``,
+put it as ``10.16.20.1:2379,10.16.20.2:2379,10.16.20.3:2379``.
+
+TiDB Operations
+------------------
+
+Taking ``SELECT`` as an example,
+
+.. code-block:: sql
+
+   SELECT * FROM foo;
+
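+Since the TiDB tables are exposed through the ``tidb_catalog`` catalog configured above,
+you can also qualify them explicitly. A sketch, assuming a TiDB database named ``test``
+containing the ``foo`` table:
+
+.. code-block:: sql
+
+   USE tidb_catalog.test;
+   SELECT * FROM foo;
+
+   -- or fully qualified
+   SELECT * FROM tidb_catalog.test.foo;
+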
+Taking ``DELETE FROM`` as an example, Spark 3 added support for DELETE FROM queries to remove data from tables.
+
+.. code-block:: sql
+
+   DELETE FROM foo WHERE id >= 1 and id < 2;
+
+.. note::
+   As of now (TiSpark 3.0.1), TiSpark does not support ``CREATE TABLE``, ``INSERT INTO/OVERWRITE`` operations
+   through Apache Spark Datasource V2 and Catalog APIs.
+
+.. _Official Documentation: https://docs.pingcap.com/tidb/stable/overview
+.. _Maven Central: https://repo1.maven.org/maven2/com/pingcap/tispark/
+.. _TiSpark Environment setup: https://docs.pingcap.com/tidb/stable/tispark-overview#environment-setup
\ No newline at end of file
diff --git a/content/docs/latest/_sources/connector/spark/tpcds.rst.txt b/content/docs/latest/_sources/connector/spark/tpcds.rst.txt
new file mode 100644
index 0000000..e52e56c
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/tpcds.rst.txt
@@ -0,0 +1,108 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+TPC-DS
+======
+
+The TPC-DS is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent
+data modifications. The queries and the data populating the database have been chosen to have broad industry-wide
+relevance.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `TPC-DS`_.
+   For the knowledge about TPC-DS not mentioned in this article, you can obtain it from its `Official Documentation`_.
+
+This connector can be used to test the capabilities and query syntax of Spark without configuring access to an external
+data source. When you query a TPC-DS table, the connector generates the data on the fly using a deterministic algorithm.
+
+Go to `Try Kyuubi`_ to explore TPC-DS data instantly!
+
+TPC-DS Integration
+------------------
+
+To enable the integration of kyuubi spark sql engine and TPC-DS through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the TPC-DS connector :ref:`dependencies<spark-tpcds-deps>`
+- Set the spark catalog :ref:`configurations<spark-tpcds-conf>`
+
+.. _spark-tpcds-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with TPC-DS supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. kyuubi-spark-connector-tpcds-\ |release|\ _2.12.jar, which can be found in the `Maven Central`_
+
+In order to make the TPC-DS connector package visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the TPC-DS connector package into ``$SPARK_HOME/jars`` directly
+2. Set spark.jars=kyuubi-spark-connector-tpcds-\ |release|\ _2.12.jar
+
+.. _spark-tpcds-conf:
+
+Configurations
+**************
+
+To add TPC-DS tables as a catalog, we can set the following configurations in ``$SPARK_HOME/conf/spark-defaults.conf``:
+
+.. code-block:: properties
+
+   # (required) Register a catalog named `tpcds` for the spark engine.
+   spark.sql.catalog.tpcds=org.apache.kyuubi.spark.connector.tpcds.TPCDSCatalog
+
+   # (optional) Excluded database list from the catalog, all available databases are:
+   #            sf0, tiny, sf1, sf10, sf30, sf100, sf300, sf1000, sf3000, sf10000, sf30000, sf100000.
+   spark.sql.catalog.tpcds.excludeDatabases=sf10000,sf30000
+
+   # (optional) When true, use CHAR/VARCHAR, otherwise use STRING. It affects output of the table schema,
+   #            e.g. `SHOW CREATE TABLE <table>`, `DESC <table>`.
+   spark.sql.catalog.tpcds.useAnsiStringType=false
+
+   # (optional) TPCDS changed table schemas in v2.6.0, turn off this option to use old table schemas.
+   #            See detail at: https://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v3.2.0.pdf
+   spark.sql.catalog.tpcds.useTableSchema_2_6=true
+
+   # (optional) Maximum bytes per task, consider reducing it if you want higher parallelism.
+   spark.sql.catalog.tpcds.read.maxPartitionBytes=128m
+
+TPC-DS Operations
+-----------------
+
+Listing databases under `tpcds` catalog.
+
+.. code-block:: sql
+
+   SHOW DATABASES IN tpcds;
+
+Listing tables under `tpcds.sf1` database.
+
+.. code-block:: sql
+
+   SHOW TABLES IN tpcds.sf1;
+
+Switch current database to `tpcds.sf1` and run a query against it.
+
+.. code-block:: sql
+
+   USE tpcds.sf1;
+   SELECT * FROM store_sales;
+
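+Because the data is generated on the fly by a deterministic algorithm, a quick sanity
+check is to count the rows of one of the smaller tables, for example:
+
+.. code-block:: sql
+
+   SELECT count(*) FROM tpcds.sf1.call_center;
+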
+.. _Official Documentation: https://www.tpc.org/tpcds/
+.. _Try Kyuubi: https://try.kyuubi.cloud/
+.. _Maven Central: https://repo1.maven.org/maven2/org/apache/kyuubi/kyuubi-spark-connector-tpcds_2.12/
\ No newline at end of file
diff --git a/content/docs/latest/_sources/connector/spark/tpch.rst.txt b/content/docs/latest/_sources/connector/spark/tpch.rst.txt
new file mode 100644
index 0000000..72ad8e9
--- /dev/null
+++ b/content/docs/latest/_sources/connector/spark/tpch.rst.txt
@@ -0,0 +1,104 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+TPC-H
+=====
+
+The TPC-H is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent
+data modifications. The queries and the data populating the database have been chosen to have broad industry-wide
+relevance.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `TPC-H`_.
+   For the knowledge about TPC-H not mentioned in this article, you can obtain it from its `Official Documentation`_.
+
+This connector can be used to test the capabilities and query syntax of Spark without configuring access to an external
+data source. When you query a TPC-H table, the connector generates the data on the fly using a deterministic algorithm.
+
+Go to `Try Kyuubi`_ to explore TPC-H data instantly!
+
+TPC-H Integration
+------------------
+
+To enable the integration of kyuubi spark sql engine and TPC-H through
+Apache Spark Datasource V2 and Catalog APIs, you need to:
+
+- Reference the TPC-H connector :ref:`dependencies<spark-tpch-deps>`
+- Set the spark catalog :ref:`configurations<spark-tpch-conf>`
+
+.. _spark-tpch-deps:
+
+Dependencies
+************
+
+The **classpath** of kyuubi spark sql engine with TPC-H supported consists of
+
+1. kyuubi-spark-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of spark distribution
+3. kyuubi-spark-connector-tpch-\ |release|\ _2.12.jar, which can be found in the `Maven Central`_
+
+In order to make the TPC-H connector package visible for the runtime classpath of engines, we can use one of these methods:
+
+1. Put the TPC-H connector package into ``$SPARK_HOME/jars`` directly
+2. Set spark.jars=kyuubi-spark-connector-tpch-\ |release|\ _2.12.jar
+
+.. _spark-tpch-conf:
+
+Configurations
+**************
+
+To add TPC-H tables as a catalog, we can set the following configurations in ``$SPARK_HOME/conf/spark-defaults.conf``:
+
+.. code-block:: properties
+
+   # (required) Register a catalog named `tpch` for the spark engine.
+   spark.sql.catalog.tpch=org.apache.kyuubi.spark.connector.tpch.TPCHCatalog
+
+   # (optional) Excluded database list from the catalog, all available databases are:
+   #            sf0, tiny, sf1, sf10, sf30, sf100, sf300, sf1000, sf3000, sf10000, sf30000, sf100000.
+   spark.sql.catalog.tpch.excludeDatabases=sf10000,sf30000
+
+   # (optional) When true, use CHAR/VARCHAR, otherwise use STRING. It affects output of the table schema,
+   #            e.g. `SHOW CREATE TABLE <table>`, `DESC <table>`.
+   spark.sql.catalog.tpch.useAnsiStringType=false
+
+   # (optional) Maximum bytes per task, consider reducing it if you want higher parallelism.
+   spark.sql.catalog.tpch.read.maxPartitionBytes=128m
+
+TPC-H Operations
+----------------
+
+Listing databases under `tpch` catalog.
+
+.. code-block:: sql
+
+   SHOW DATABASES IN tpch;
+
+Listing tables under `tpch.sf1` database.
+
+.. code-block:: sql
+
+   SHOW TABLES IN tpch.sf1;
+
+Switch current database to `tpch.sf1` and run a query against it.
+
+.. code-block:: sql
+
+   USE tpch.sf1;
+   SELECT * FROM orders;
+
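+The connector exposes the standard TPC-H schema, so classic TPC-H style queries can be
+run as-is. For example, a simplified variant of the pricing summary query, assuming the
+standard TPC-H column names:
+
+.. code-block:: sql
+
+   SELECT l_returnflag, l_linestatus, sum(l_quantity) AS sum_qty
+   FROM tpch.sf1.lineitem
+   WHERE l_shipdate <= date '1998-09-02'
+   GROUP BY l_returnflag, l_linestatus
+   ORDER BY l_returnflag, l_linestatus;
+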
+.. _Official Documentation: https://www.tpc.org/tpch/
+.. _Try Kyuubi: https://try.kyuubi.cloud/
+.. _Maven Central: https://repo1.maven.org/maven2/org/apache/kyuubi/kyuubi-spark-connector-tpch_2.12/
\ No newline at end of file
diff --git a/content/docs/latest/_sources/connector/trino/flink_table_store.rst.txt b/content/docs/latest/_sources/connector/trino/flink_table_store.rst.txt
new file mode 100644
index 0000000..8dd0c40
--- /dev/null
+++ b/content/docs/latest/_sources/connector/trino/flink_table_store.rst.txt
@@ -0,0 +1,94 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Flink Table Store`_
+====================
+
+Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
+supporting high-speed data ingestion and timely data query.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
+   For the knowledge about Flink Table Store not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries against Flink Table Store in a way that is more
+convenient, easier to understand, and easier to extend than using
+Trino directly to manipulate Flink Table Store.
+
+Flink Table Store Integration
+------------------------------
+
+To enable the integration of the Kyuubi Trino SQL engine and Flink Table Store, you need to:
+
+- Reference the Flink Table Store :ref:`dependencies<trino-flink-table-store-deps>`
+- Set the Trino extension and catalog :ref:`configurations<trino-flink-table-store-conf>`
+
+.. _trino-flink-table-store-deps:
+
+Dependencies
+************
+
+The **classpath** of the Kyuubi Trino SQL engine with Flink Table Store support consists of
+
+1. kyuubi-trino-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of the Trino distribution
+3. flink-table-store-trino-<version>.jar (example: flink-table-store-trino-0.2.jar), whose source code can be found in the `Source Code`_ repository
+4. flink-shaded-hadoop-2-uber-2.8.3-10.0.jar, which can be downloaded from `Pre-bundled Hadoop 2.8.3`_
+
+To make the Flink Table Store packages visible to the runtime classpath of engines, follow these steps:
+
+1. Build the flink-table-store-trino-<version>.jar by reference to `Flink Table Store Trino README`_
+2. Put the flink-table-store-trino-<version>.jar and flink-shaded-hadoop-2-uber-2.8.3-10.0.jar packages into ``$TRINO_SERVER_HOME/plugin/tablestore`` directly
+
+.. warning::
+   Please mind the compatibility of different Flink Table Store and Trino versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
+
+.. _trino-flink-table-store-conf:
+
+Configurations
+**************
+
+To activate the functionality of Flink Table Store, set the following configurations.
+
+Catalogs are registered by creating a catalog properties file in the ``$TRINO_SERVER_HOME/etc/catalog`` directory.
+For example, create ``$TRINO_SERVER_HOME/etc/catalog/tablestore.properties`` with the following contents to mount the ``tablestore`` connector as the ``tablestore`` catalog:
+
+.. code-block:: properties
+
+   connector.name=tablestore
+   warehouse=file:///tmp/warehouse
+
+Flink Table Store Operations
+-----------------------------
+
+Flink Table Store supports reading table store tables through Trino.
+A common scenario is to write data with Flink and read data with Trino.
+You can follow the document `Flink Table Store Quick Start`_ to write data to a table store table
+and then use the Kyuubi Trino SQL engine to query the table with the following SQL ``SELECT`` statement.
+
+
+.. code-block:: sql
+
+   SELECT * FROM tablestore.default.t1
+
+
+.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Flink Table Store Quick Start: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/try-table-store/quick-start/
+.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
+.. _Source Code: https://github.com/JingsongLi/flink-table-store-trino
+.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
+.. _Pre-bundled Hadoop 2.8.3: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
+.. _Flink Table Store Trino README: https://github.com/JingsongLi/flink-table-store-trino#readme
diff --git a/content/docs/latest/_sources/connector/trino/iceberg.rst.txt b/content/docs/latest/_sources/connector/trino/iceberg.rst.txt
new file mode 100644
index 0000000..6fc09bc
--- /dev/null
+++ b/content/docs/latest/_sources/connector/trino/iceberg.rst.txt
@@ -0,0 +1,92 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Iceberg`_
+==========
+
+Apache Iceberg is an open table format for huge analytic datasets.
+Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink, Hive and Impala
+using a high-performance table format that works just like a SQL table.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Iceberg`_.
+   For the knowledge about Iceberg not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries against Iceberg in a way that is more
+convenient, easier to understand, and easier to extend than using
+Trino directly to manipulate Iceberg.
+
+Iceberg Integration
+-------------------
+
+To enable the integration of the Kyuubi Trino SQL engine and Iceberg through Catalog APIs, you need to:
+
+- Set the Trino extension and catalog :ref:`configurations`
+
+.. _configurations:
+
+Configurations
+**************
+
+To activate the functionality of Iceberg, set the following configurations in a catalog properties file, e.g. ``$TRINO_SERVER_HOME/etc/catalog/iceberg.properties``:
+
+.. code-block:: properties
+
+   connector.name=iceberg
+   hive.metastore.uri=thrift://localhost:9083
+
+Iceberg Operations
+------------------
+
+Taking ``CREATE TABLE`` as an example,
+
+.. code-block:: sql
+
+   CREATE TABLE orders (
+     orderkey bigint,
+     orderstatus varchar,
+     totalprice double,
+     orderdate date
+   ) WITH (
+     format = 'ORC'
+   );
+
+Taking ``SELECT`` as an example,
+
+.. code-block:: sql
+
+   SELECT * FROM new_orders;
+
+Taking ``INSERT`` as an example,
+
+.. code-block:: sql
+
+   INSERT INTO cities VALUES (1, 'San Francisco');
+
+Taking ``UPDATE`` as an example,
+
+.. code-block:: sql
+
+   UPDATE purchases SET status = 'OVERDUE' WHERE ship_date IS NULL;
+
+Taking ``DELETE FROM`` as an example,
+
+.. code-block:: sql
+
+   DELETE FROM lineitem WHERE shipmode = 'AIR';
+
+.. _Iceberg: https://iceberg.apache.org/
+.. _Official Documentation: https://trino.io/docs/current/connector/iceberg.html#
diff --git a/content/docs/latest/_sources/connector/trino/index.rst.txt b/content/docs/latest/_sources/connector/trino/index.rst.txt
new file mode 100644
index 0000000..a5c5675
--- /dev/null
+++ b/content/docs/latest/_sources/connector/trino/index.rst.txt
@@ -0,0 +1,23 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+Connectors For Trino SQL Engine
+=====================================
+
+.. toctree::
+    :maxdepth: 2
+
+    flink_table_store
+    iceberg
\ No newline at end of file
diff --git a/content/docs/latest/deployment/engine_lifecycle.html b/content/docs/latest/_sources/deployment/engine_lifecycle.md.txt
similarity index 55%
copy from content/docs/latest/deployment/engine_lifecycle.html
copy to content/docs/latest/_sources/deployment/engine_lifecycle.md.txt
index e2f46ad..35944fa 100644
--- a/content/docs/latest/deployment/engine_lifecycle.html
+++ b/content/docs/latest/_sources/deployment/engine_lifecycle.md.txt
@@ -1,205 +1,4 @@
-
-
-<!DOCTYPE html>
-<html class="writer-html5" lang="en" >
-<head>
-  <meta charset="utf-8" />
-  
-  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-  
-  <title>4. The TTL Of Kyuubi Engines &mdash; Kyuubi 1.5.1-incubating documentation</title>
-  
-
-  
-  <link rel="stylesheet" href="../_static/css/custom.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/css/custom.css" type="text/css" />
-
-  
-  
-
-  
-  
-
-  
-
-  
-  <!--[if lt IE 9]>
-    <script src="../_static/js/html5shiv.min.js"></script>
-  <![endif]-->
-  
-    
-      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
-        <script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js"></script>
-        <script src="../_static/jquery.js"></script>
-        <script src="../_static/underscore.js"></script>
-        <script src="../_static/doctools.js"></script>
-    
-    <script type="text/javascript" src="../_static/js/theme.js"></script>
-
-    
-    <link rel="index" title="Index" href="../genindex.html" />
-    <link rel="search" title="Search" href="../search.html" />
-    <link rel="next" title="5. The Spark SQL Engine Configuration Guide" href="spark/index.html" />
-    <link rel="prev" title="3. The Share Level Of Kyuubi Engines" href="engine_share_level.html" /> 
-</head>
-
-<body class="wy-body-for-nav">
-
-   
-  <div class="wy-grid-for-nav">
-    
-    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
-      <div class="wy-side-scroll">
-        <div class="wy-side-nav-search" >
-          
-
-          
-            <a href="../index.html" class="icon icon-home"> Kyuubi
-          
-
-          
-            
-            <img src="../_static/kyuubi_logo_gray.png" class="logo" alt="Logo"/>
-          
-          </a>
-
-          
-            
-            
-          
-
-          
-<div role="search">
-  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
-    <input type="text" name="q" placeholder="Search docs" />
-    <input type="hidden" name="check_keywords" value="yes" />
-    <input type="hidden" name="area" value="default" />
-  </form>
-</div>
-
-          
-        </div>
-
-        
-        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
-          
-            
-            
-              
-            
-            
-              <p class="caption" role="heading"><span class="caption-text">Usage Guide</span></p>
-<ul class="current">
-<li class="toctree-l1"><a class="reference internal" href="../quick_start/index.html">Quick Start</a></li>
-<li class="toctree-l1 current"><a class="reference internal" href="index.html">Deploying Kyuubi</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="index.html#basics">Basics</a></li>
-<li class="toctree-l2"><a class="reference internal" href="index.html#configurations">Configurations</a></li>
-<li class="toctree-l2 current"><a class="reference internal" href="index.html#engines">Engines</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="engine_on_yarn.html">1. Deploy Kyuubi engines on Yarn</a></li>
-<li class="toctree-l3"><a class="reference internal" href="engine_on_kubernetes.html">2. Deploy Kyuubi engines on Kubernetes</a></li>
-<li class="toctree-l3"><a class="reference internal" href="engine_share_level.html">3. The Share Level Of Kyuubi Engines</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">4. The TTL Of Kyuubi Engines</a><ul>
-<li class="toctree-l4"><a class="reference internal" href="#the-big-contributors-of-resource-waste">4.1. The Big Contributors Of Resource Waste</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#ttl-types-in-kyuubi-engines">4.2. TTL Types In Kyuubi Engines</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#configurations">4.3. Configurations</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="spark/index.html">5. The Spark SQL Engine Configuration Guide</a></li>
-</ul>
-</li>
-</ul>
-</li>
-<li class="toctree-l1"><a class="reference internal" href="../security/index.html">Security</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../client/index.html">Client Documentation</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../integrations/index.html">Integrations</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../monitor/index.html">Monitoring</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../sql/index.html">SQL References</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../tools/index.html">Tools</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Kyuubi Insider</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../overview/index.html">Overview</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Contributing</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../develop_tools/index.html">Develop Tools</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../community/index.html">Community</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Appendix</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../appendix/index.html">Appendixes</a></li>
-</ul>
-
-            
-          
-        </div>
-        
-      </div>
-    </nav>
-
-    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
-
-      
-      <nav class="wy-nav-top" aria-label="top navigation">
-        
-          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
-          <a href="../index.html">Kyuubi</a>
-        
-      </nav>
-
-
-      <div class="wy-nav-content">
-        
-        <div class="rst-content">
-        
-          
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-<div role="navigation" aria-label="breadcrumbs navigation">
-
-  <ul class="wy-breadcrumbs">
-    
-      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
-        
-          <li><a href="index.html">Deploying Kyuubi</a> &raquo;</li>
-        
-      <li><span class="section-number">4. </span>The TTL Of Kyuubi Engines</li>
-    
-    
-      <li class="wy-breadcrumbs-aside">
-        
-          
-            <a href="../_sources/deployment/engine_lifecycle.md.txt" rel="nofollow"> View page source</a>
-          
-        
-      </li>
-    
-  </ul>
-
-  
-  <hr/>
-</div>
-          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
-           <div itemprop="articleBody">
-            
-  <!--
+<!--
  - Licensed to the Apache Software Foundation (ASF) under one or more
  - contributor license agreements.  See the NOTICE file distributed with
  - this work for additional information regarding copyright ownership.
@@ -214,149 +13,47 @@
  - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  - See the License for the specific language governing permissions and
  - limitations under the License.
- --><div align=center><p><img alt="../_images/kyuubi_logo.png" src="../_images/kyuubi_logo.png" /></p>
-</div><div class="section" id="the-ttl-of-kyuubi-engines">
-<h1><span class="section-number">4. </span>The TTL Of Kyuubi Engines<a class="headerlink" href="#the-ttl-of-kyuubi-engines" title="Permalink to this headline">¶</a></h1>
-<p>For a multi-tenant cluster, its overall resource utilization is a KPI that measures how effectively its resource is utilized against its availability or capacity.
-To better improve the overall resource utilization of the cluster,</p>
-<ul class="simple">
-<li><p>At cluster layer, we leverage the capabilities, such as <a class="reference external" href="https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>, of resource scheduling management services, such as YARN and K8s.</p></li>
-<li><p>At application layer, we’d be better to acquire and release resources according to the real workloads.</p></li>
-</ul>
-<div class="section" id="the-big-contributors-of-resource-waste">
-<h2><span class="section-number">4.1. </span>The Big Contributors Of Resource Waste<a class="headerlink" href="#the-big-contributors-of-resource-waste" title="Permalink to this headline">¶</a></h2>
-<ul class="simple">
-<li><p>The time to wait for the resource to be allocated, such as the scheduling delay, the start/stop cost.</p>
-<ul>
-<li><p>A longer time-to-live(TTL) for allocated resources can significantly reduce such time costs within an application.</p></li>
-</ul>
-</li>
-<li><p>The time being idle of the resource.</p>
-<ul>
-<li><p>A shorter time to live for allocated resources can make all resources in rapid turnarounds across applications.</p></li>
-</ul>
-</li>
-</ul>
-</div>
-<div class="section" id="ttl-types-in-kyuubi-engines">
-<h2><span class="section-number">4.2. </span>TTL Types In Kyuubi Engines<a class="headerlink" href="#ttl-types-in-kyuubi-engines" title="Permalink to this headline">¶</a></h2>
-<body><div class="mxgraph" style="" data-mxgraph="{&quot;lightbox&quot;:false,&quot;nav&quot;:true,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-12-10T05:54:16.011Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36\&quot; etag=\&quot;OsSJijKpSO7wcXva956r\&quot; version=\&quot;15.8.7\&quot; type=\&quot;de [...]
-<script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
-</body><ul class="simple">
-<li><p>Engine TTL</p>
-<ul>
-<li><p>The TTL of engines describes how long an engine will be cached after all sessions are disconnected.</p></li>
-</ul>
-</li>
-<li><p>Executor TTL</p>
-<ul>
-<li><p>The TTL of the executor describes how long an executor will be cached when no tasks come.</p></li>
-</ul>
-</li>
-</ul>
-</div>
-<div class="section" id="configurations">
-<h2><span class="section-number">4.3. </span>Configurations<a class="headerlink" href="#configurations" title="Permalink to this headline">¶</a></h2>
-<div class="section" id="engine-ttl">
-<h3><span class="section-number">4.3.1. </span>Engine TTL<a class="headerlink" href="#engine-ttl" title="Permalink to this headline">¶</a></h3>
-<table border="1" class="docutils">
-<thead>
-<tr>
-<th>Key</th>
-<th>Default</th>
-<th>Meaning</th>
-<th>Type</th>
-<th>Since</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>kyuubi.session.engine<br>.check.interval</td>
-<td><div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5M</div></td>
-<td><div style='width: 170pt;word-wrap: break-word;white-space: normal'>The check interval for engine timeout</div></td>
-<td><div style='width: 30pt'>duration</div></td>
-<td><div style='width: 20pt'>1.0.0</div></td>
-</tr>
-<tr>
-<td>kyuubi.session.engine<br>.idle.timeout</td>
-<td><div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT30M</div></td>
-<td><div style='width: 170pt;word-wrap: break-word;white-space: normal'>engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.</div></td>
-<td><div style='width: 30pt'>duration</div></td>
-<td><div style='width: 20pt'>1.0.0</div></td>
-</tr>
-</tbody>
-</table><p>The above two configurations can be used together to set the TTL of engines.
-These configurations are user-facing and able to use in JDBC urls.
-Note that, for <a class="reference external" href="engine_share_level.html#connection">connection</a> share level engines that will be terminated at once when the connection is disconnected, these configurations not necessarily work in this case.</p>
-</div>
-<div class="section" id="executor-ttl">
-<h3><span class="section-number">4.3.2. </span>Executor TTL<a class="headerlink" href="#executor-ttl" title="Permalink to this headline">¶</a></h3>
-<p>Executor TTL is part of functionality of Apache Spark’s <a class="reference internal" href="spark/dynamic_allocation.html"><span class="doc">Dynamic Resource Allocation</span></a>.</p>
-</div>
-</div>
-</div>
+ -->
 
+# The TTL Of Kyuubi Engines
 
-           </div>
-           
-          </div>
-          <footer>
-    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
-        <a href="spark/index.html" class="btn btn-neutral float-right" title="5. The Spark SQL Engine Configuration Guide" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
-        <a href="engine_share_level.html" class="btn btn-neutral float-left" title="3. The Share Level Of Kyuubi Engines" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
-    </div>
+For a multi-tenant cluster, its overall resource utilization is a KPI that measures how effectively its resource is utilized against its availability or capacity.
+To better improve the overall resource utilization of the cluster,
+- At the cluster layer, we leverage the capabilities, such as the [Capacity Scheduler](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html), of resource scheduling management services such as YARN and K8s.
+- At the application layer, we should acquire and release resources according to the real workloads.
 
-  <hr/>
+## The Big Contributors Of Resource Waste
 
-  <div role="contentinfo">
-    <p>
-        &#169; Copyright 
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the &#34;License&#34;); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
+- The time spent waiting for the resource to be allocated, such as the scheduling delay and the start/stop cost.
+  - A longer time-to-live (TTL) for allocated resources can significantly reduce such time costs within an application.
 
-   http://www.apache.org/licenses/LICENSE-2.0
+- The time the resource stays idle.
+  - A shorter time-to-live for allocated resources can keep all resources in rapid turnaround across applications.
 
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an &#34;AS IS&#34; BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-.
+## TTL Types In Kyuubi Engines
 
-    </p>
-  </div>
-    
-    
-    
-    Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
-    
-    <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
-    
-    provided by <a href="https://readthedocs.org">Read the Docs</a>. 
+<body><div class="mxgraph" style="" data-mxgraph="{&quot;lightbox&quot;:false,&quot;nav&quot;:true,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-12-10T05:54:16.011Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36\&quot; etag=\&quot;OsSJijKpSO7wcXva956r\&quot; version=\&quot;15.8.7\&quot; type=\&quot;de [...]
+<script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
+</body>
 
-</footer>
-        </div>
-      </div>
+- Engine TTL
+  - The TTL of engines describes how long an engine will be cached after all sessions are disconnected.
+- Executor TTL
+  - The TTL of the executor describes how long an executor will be cached when no tasks come.
 
-    </section>
+## Configurations
 
-  </div>
-  
+### Engine TTL
 
-  <script type="text/javascript">
-      jQuery(function () {
-          SphinxRtdTheme.Navigation.enable(true);
-      });
-  </script>
+| Key                                          | Default                                                                        | Meaning                                                                                                                                                                                                       | Type                                    | Since                                |
+|----------------------------------------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|--------------------------------------|
+| kyuubi\.session\.engine<br>\.check\.interval | <div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5M</div>  | <div style='width: 170pt;word-wrap: break-word;white-space: normal'>The check interval for engine timeout</div>                                                                                               | <div style='width: 30pt'>duration</div> | <div style='width: 20pt'>1.0.0</div> |
+| kyuubi\.session\.engine<br>\.idle\.timeout   | <div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT30M</div> | <div style='width: 170pt;word-wrap: break-word;white-space: normal'>engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.</div> | <div style='width: 30pt'>duration</div> | <div style='width: 20pt'>1.0.0</div> |
 
-  
-  
-    
-   
+The above two configurations can be used together to set the TTL of engines.
+These configurations are user-facing and can be used in JDBC URLs.
+Note that for [connection](engine_share_level.html#connection) share level engines, which are terminated at once when the connection is disconnected, these configurations do not necessarily take effect.
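+
+For example, a tighter TTL can be requested for a particular connection through the JDBC URL (a sketch only; the duration values are arbitrary):
+
+```
+jdbc:hive2://localhost:10009/;#kyuubi.session.engine.idle.timeout=PT10M;kyuubi.session.engine.check.interval=PT2M
+```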
 
-</body>
-</html>
\ No newline at end of file
+### Executor TTL
+
+Executor TTL is part of the functionality of Apache Spark's [Dynamic Resource Allocation](./spark/dynamic_allocation.md).
diff --git a/content/docs/latest/_sources/deployment/engine_on_kubernetes.md.txt b/content/docs/latest/_sources/deployment/engine_on_kubernetes.md.txt
new file mode 100644
index 0000000..6f3e73a
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/engine_on_kubernetes.md.txt
@@ -0,0 +1,121 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Deploy Kyuubi engines on Kubernetes
+
+## Requirements
+
+When you want to run Kyuubi's Spark SQL engines on Kubernetes, you should be familiar with the following.
+
+* Read about [Running Spark On Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html)
+* An active Kubernetes cluster
+* [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
+* KubeConfig of the target cluster
+
+## Configurations
+
+### Master
+
+Spark on Kubernetes configures the master using a special URL format.
+
+`spark.master=k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`
+
+You can use the command `kubectl cluster-info` to get the API server host and port, as sketched below.
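+
+A minimal sketch of wiring this up (the address is a placeholder; substitute the host and port reported by your cluster):
+
+```shell
+# Print the API server address of the current kubectl context
+kubectl cluster-info
+# Then set the master, e.g. in $KYUUBI_HOME/conf/kyuubi-defaults.conf:
+# spark.master=k8s://https://192.168.49.2:8443
+```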
+
+### Docker Image
+
+Spark ships a `./bin/docker-image-tool.sh` script to build and publish the Docker images for running Spark applications on Kubernetes.
+
+When deploying Kyuubi engines against a Kubernetes cluster, we need to set up the docker images in the Docker registry first.
+
+Example usage is:
+
+```shell
+./bin/docker-image-tool.sh -r <repo> -t my-tag build
+./bin/docker-image-tool.sh -r <repo> -t my-tag push
+# To build the docker image with a specific OpenJDK version
+./bin/docker-image-tool.sh -r <repo> -t my-tag -b java_image_tag=<openjdk:${java_image_tag}> build
+# To build additional PySpark docker image
+./bin/docker-image-tool.sh -r <repo> -t my-tag -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile build
+# To build additional SparkR docker image
+./bin/docker-image-tool.sh -r <repo> -t my-tag -R ./kubernetes/dockerfiles/spark/bindings/R/Dockerfile build
+```
+
+### Test Cluster
+
+You can use the following shell command to test whether your cluster works properly.
+
+```shell
+$SPARK_HOME/bin/spark-submit \
+ --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+ --class org.apache.spark.examples.SparkPi \
+ --conf spark.executor.instances=5 \
+ --conf spark.dynamicAllocation.enabled=false \
+ --conf spark.shuffle.service.enabled=false \
+ --conf spark.kubernetes.container.image=<spark-image> \
+ local://<path_to_examples.jar>
+```
+
+While the example is running, you can use `kubectl describe pod <podName>` to check whether the pod information meets expectations.
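+
+For instance (a sketch; the namespace and pod name are placeholders):
+
+```shell
+kubectl get pods -n <namespace>
+kubectl describe pod <podName> -n <namespace>
+# Inspect the driver log if the application misbehaves
+kubectl logs <podName> -n <namespace>
+```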
+
+### ServiceAccount
+
+When using Client mode to submit an application, the Spark driver uses the kubeconfig to access the API server to create and watch executor pods.
+
+When using Cluster mode to submit an application, the Spark driver pod uses a serviceAccount to access the API server to create and watch executor pods.
+
+In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following commands to create a serviceAccount (you need a kubeconfig with the permission to create serviceAccounts).
+
+```shell
+# create serviceAccount
+kubectl create serviceaccount spark -n <namespace>
+# binding role
+kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark --namespace=<namespace>
+```
+
+### Volumes
+
+Spark on Kubernetes can mount the following types of volumes into the driver and executor pods via configurations.
+
+* hostPath: mounts a file or directory from the host node’s filesystem into a pod.
+* emptyDir: an initially empty volume created when a pod is assigned to a node.
+* nfs: mounts an existing NFS(Network File System) into a pod.
+* persistentVolumeClaim: mounts a PersistentVolume into a pod.
+
+Note: Please
+see [the Security section of this document](http://spark.apache.org/docs/latest/running-on-kubernetes.html#security) for security issues related to volume mounts.
+
+```properties
+spark.kubernetes.driver.volumes.<type>.<name>.options.path=<dist_path>
+spark.kubernetes.driver.volumes.<type>.<name>.mount.path=<container_path>
+
+spark.kubernetes.executor.volumes.<type>.<name>.options.path=<dist_path>
+spark.kubernetes.executor.volumes.<type>.<name>.mount.path=<container_path>
+```
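+
+For instance, a minimal sketch that mounts a hostPath directory into the executors (the volume name `data` and the paths are placeholders):
+
+```properties
+spark.kubernetes.executor.volumes.hostPath.data.options.path=/data
+spark.kubernetes.executor.volumes.hostPath.data.mount.path=/data
+```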
+
+Read [Using Kubernetes Volumes](http://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more about volumes.
+
+### PodTemplateFile
+
+Kubernetes allows defining pods from template files. Spark users can similarly use template files to define the driver or executor pod configurations that Spark configurations do not support.
+
+To do so, specify the spark properties `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` to point to local files accessible to the spark-submit process.
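+
+For example (a sketch; the template paths are placeholders and must be readable by the spark-submit process on the Kyuubi server host):
+
+```properties
+spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml
+spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml
+```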
+
+### Other
+
+You can read Spark's official documentation for [Running on Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html) for more information.
\ No newline at end of file
diff --git a/content/docs/latest/_sources/deployment/engine_on_yarn.md.txt b/content/docs/latest/_sources/deployment/engine_on_yarn.md.txt
new file mode 100644
index 0000000..54f8b50
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/engine_on_yarn.md.txt
@@ -0,0 +1,258 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Deploy Kyuubi engines on Yarn
+
+## Deploy Kyuubi Spark Engine on Yarn
+
+### Requirements
+
+When you want to deploy Kyuubi's Spark SQL engines on YARN, you should be familiar with the following.
+
+- Knowing the basics about [Running Spark on YARN](http://spark.apache.org/docs/latest/running-on-yarn.html)
+- A binary distribution of Spark which is built with YARN support
+  - You can use the built-in Spark distribution
+  - You can get it from [Spark official website](https://spark.apache.org/downloads.html) directly
+  - You can [Build Spark](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn) with `-Pyarn` maven option
+- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
+- An active Apache Hadoop HDFS cluster
+- Setup Hadoop client configurations at the machine the Kyuubi server locates
+
+### Configurations
+
+#### Environment
+
+Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
+
+If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, you should be able to run the `SparkPi` example on YARN.
+```bash
+$ HADOOP_CONF_DIR=/path/to/hadoop/conf $SPARK_HOME/bin/spark-submit \
+    --class org.apache.spark.examples.SparkPi \
+    --master yarn \
+    --queue thequeue \
+    $SPARK_HOME/examples/jars/spark-examples*.jar \
+    10
+```
+
+If the `SparkPi` application succeeds, configure `HADOOP_CONF_DIR` in `$KYUUBI_HOME/conf/kyuubi-env.sh` or `$SPARK_HOME/conf/spark-env.sh`, e.g.
+
+```bash
+$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+```
+
+#### Spark Properties
+
+These properties are defined by Spark and Kyuubi will pass them to `spark-submit` to create Spark applications.
+
+**Note:** None of these would take effect if the application for a particular user already exists.
+
+- Specify it in the JDBC connection URL, e.g. `jdbc:hive2://localhost:10009/;#spark.master=yarn;spark.yarn.queue=thequeue`
+- Specify it in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`
+- Specify it in `$SPARK_HOME/conf/spark-defaults.conf`
+
+**Note:** The priority goes down from top to bottom.
+
+##### Master
+
+Setting `spark.master=yarn` tells Kyuubi to submit Spark SQL engine applications to the YARN cluster manager.
+
+##### Queue
+
+Set `spark.yarn.queue=thequeue` in the JDBC connection string to tell Kyuubi to use the QUEUE in the YARN cluster, otherwise,
+the QUEUE configured at Kyuubi server side will be used as default.
+
+##### Sizing
+
+Pass the configurations below through the JDBC connection string to set how many Spark executor instances will be used,
+and how many CPUs and how much memory the Spark driver, ApplicationMaster and each executor will take (see the example after the table).
+
+Name | Default | Meaning
+--- | --- | ---
+spark.executor.instances | 1 | The number of executors for static allocation
+spark.executor.cores | 1 | The number of cores to use on each executor
+spark.yarn.am.memory | 512m | Amount of memory to use for the YARN Application Master in client mode
+spark.yarn.am.memoryOverhead | amMemory * 0.10, with minimum of 384 | Amount of non-heap memory to be allocated per am process in client mode
+spark.driver.memory | 1g | Amount of memory to use for the driver process
+spark.driver.memoryOverhead | driverMemory * 0.10, with minimum of 384 | Amount of non-heap memory to be allocated per driver process in cluster mode
+spark.executor.memory | 1g | Amount of memory to use for the executor process
+spark.executor.memoryOverhead | executorMemory * 0.10, with minimum of 384 | Amount of additional memory to be allocated per executor process. This is memory that accounts for things like VM overheads, interned strings other native overheads, etc
+
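+For example, a sketch of a JDBC connection string that requests a medium-sized engine (the sizes are illustrative only):
+
+```
+jdbc:hive2://localhost:10009/;#spark.master=yarn;spark.executor.instances=5;spark.executor.cores=2;spark.executor.memory=4g
+```
+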
+It is recommended to use [Dynamic Allocation](http://spark.apache.org/docs/3.0.1/configuration.html#dynamic-allocation) with Kyuubi,
+since the SQL engine will be long-running, executing users' queries from clients periodically,
+and the demand for computing resources is not the same for those queries.
+It is better for Spark to release some executors when either the query is lightweight or the SQL engine is idle.
+
+##### Tuning
+
+You can specify `spark.yarn.archive` or `spark.yarn.jars` to point to a world-readable location that contains Spark jars on HDFS,
+which allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. 
+
+##### Others
+
+Please refer to [Spark properties](http://spark.apache.org/docs/latest/running-on-yarn.html#spark-properties) to check other acceptable configs.
+
+### Kerberos
+
+Kyuubi currently does not support Spark's [YARN-specific Kerberos Configuration](http://spark.apache.org/docs/3.0.1/running-on-yarn.html#kerberos),
+so `spark.kerberos.keytab` and `spark.kerberos.principal` should not be used for now.
+
+Instead, you can schedule a periodic `kinit` process via a `crontab` task on the local machine that hosts the Kyuubi server (see the sketch below), or simply use [Kyuubi Kinit](settings.html#kinit).
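+
+A minimal sketch of such a crontab entry (the schedule, principal and keytab path are placeholders):
+
+```bash
+# Renew the Kerberos ticket cache every 6 hours on the Kyuubi server host
+0 */6 * * * kinit -kt /path/to/kyuubi.keytab kyuubi/kyuubi-host@EXAMPLE.COM
+```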
+
+## Deploy Kyuubi Flink Engine on Yarn
+
+### Requirements
+
+When you want to deploy Kyuubi's Flink SQL engines on YARN, you should be familiar with the following.
+
+- Knowing the basics about [Running Flink on YARN](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn)
+- A binary distribution of Flink which is built with YARN support
+  - Download a recent Flink distribution from the [Flink official website](https://flink.apache.org/downloads.html) and unpack it
+- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
+  - Make sure your YARN cluster is ready for accepting Flink applications by running `yarn top`. It should show no error messages
+- An active Object Storage cluster, e.g. [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html), S3 and [Minio](https://min.io/) etc.
+- Setup Hadoop client configurations at the machine the Kyuubi server locates
+
+### Yarn Session Mode
+
+#### Flink Configurations
+
+```bash
+execution.target: yarn-session
+# Yarn Session Cluster application id.
+yarn.application.id: application_00000000XX_00XX
+```
+
+#### Environment
+
+Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
+
+If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, and the `HADOOP_CLASSPATH` environment variable is set, you can launch a Flink on YARN session, and submit an example job:
+```bash
+# we assume to be in the root directory of 
+# the unzipped Flink distribution
+
+# (0) export HADOOP_CLASSPATH
+export HADOOP_CLASSPATH=`hadoop classpath`
+
+# (1) Start YARN Session
+./bin/yarn-session.sh --detached
+
+# (2) You can now access the Flink Web Interface through the
+# URL printed in the last lines of the command output, or through
+# the YARN ResourceManager web UI.
+
+# (3) Submit example job
+./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
+
+# (4) Stop YARN session (replace the application id based 
+# on the output of the yarn-session.sh command)
+echo "stop" | ./bin/yarn-session.sh -id application_XXXXX_XXX
+```
+
+If the `TopSpeedWindowing` example succeeds, configure `HADOOP_CONF_DIR` in `$KYUUBI_HOME/conf/kyuubi-env.sh`:
+
+```bash
+$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+```
+
+#### Required Environment Variable
+
+The `FLINK_HADOOP_CLASSPATH` environment variable is required, too.
+
+For users on Hadoop 3.x, the Hadoop shaded client is recommended instead of the Hadoop vanilla jars.
+For users on Hadoop 2.x, `FLINK_HADOOP_CLASSPATH` should be set to the hadoop classpath to use the Hadoop
+vanilla jars. For users who do not use Hadoop services (e.g. HDFS, YARN) at all, Hadoop client jars
+are still required, and the Hadoop shaded client is recommended, as for Hadoop 3.x users.
+
+See [HADOOP-11656](https://issues.apache.org/jira/browse/HADOOP-11656) for details of Hadoop shaded client.
+
+To use the Hadoop shaded client, please configure `$KYUUBI_HOME/conf/kyuubi-env.sh` as follows:
+
+```bash
+$ echo "export FLINK_HADOOP_CLASSPATH=/path/to/hadoop-client-runtime-3.3.2.jar:/path/to/hadoop-client-api-3.3.2.jar" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+```
+
+To use the Hadoop vanilla jars, please configure `$KYUUBI_HOME/conf/kyuubi-env.sh` as follows:
+
+```bash
+$ echo "export FLINK_HADOOP_CLASSPATH=`hadoop classpath`" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+```
+
+### Deployment Modes Supported by Flink on YARN
+
+For experiment use, we recommend deploying Kyuubi Flink SQL engine in [Session Mode](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#session-mode).
+At present, [Application Mode](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#application-mode) and [Per-Job Mode (deprecated)](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/yarn/#per-job-mode-deprecated) are not supported for Flink engine.
+
+### Kerberos
+
+The Kyuubi Flink SQL engine wraps the Flink SQL client, which currently does not support [Flink Kerberos Configuration](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/#security-kerberos-login-keytab),
+so `security.kerberos.login.keytab` and `security.kerberos.login.principal` should not be used for now.
+
+Instead, you can schedule a periodic `kinit` process via a `crontab` task on the local machine that hosts the Kyuubi server, as described for the Spark engine above, or simply use [Kyuubi Kinit](settings.html#kinit).
+
+## Deploy Kyuubi Hive Engine on Yarn
+
+### Requirements
+
+When you want to deploy Kyuubi's Hive SQL engines on YARN, you should be familiar with the following.
+
+- Knowing the basics about [Running Hive on YARN](https://cwiki.apache.org/confluence/display/Hive/GettingStarted)
+- A binary distribution of Hive
+  - You can use the built-in Hive distribution
+  - Download a recent Hive distribution from the [Hive official website](https://hive.apache.org/downloads.html) and unpack it
+  - You can [Build Hive](https://cwiki.apache.org/confluence/display/Hive//GettingStarted#GettingStarted-BuildingHivefromSource)
+- An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
+  - Make sure your YARN cluster is ready for accepting Hive applications by running `yarn top`. It should show no error messages
+- An active [Apache Hadoop HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) cluster
+- Setup Hadoop client configurations at the machine the Kyuubi server locates
+- An active [Hive Metastore Service](https://cwiki.apache.org/confluence/display/hive/design#Design-Metastore)
+
+### Configurations
+
+#### Environment
+
+Either `HADOOP_CONF_DIR` or `YARN_CONF_DIR` is configured and points to the Hadoop client configurations directory, usually, `$HADOOP_HOME/etc/hadoop`.
+
+If the `HADOOP_CONF_DIR` points to the YARN and HDFS cluster correctly, you should be able to run the `Hive SQL` example on YARN.
+
+```bash
+$ $HIVE_HOME/bin/hiveserver2
+# In another terminal
+$ $HIVE_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/default'
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE pokes (foo INT, bar STRING);
+0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello');
+```
+
+If the Hive SQL statements succeed and a job shows up in the YARN Web UI, it indicates that the Hive environment works properly.
+
+#### Required Environment Variable
+
+The `HIVE_HADOOP_CLASSPATH` is required, too. It should contain `commons-collections-*.jar`, 
+`hadoop-client-runtime-*.jar`, `hadoop-client-api-*.jar` and `htrace-core4-*.jar`.
+All four jars can be found under `HADOOP_HOME`.
+
+For example, in Hadoop 3.1.0, they are located as follows:
+- `${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar`
+- `${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar`
+- `${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar`
+- `${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar`
+
+Configure them in `$KYUUBI_HOME/conf/kyuubi-env.sh` or `$HIVE_HOME/conf/hive-env.sh`, e.g.
+
+```bash
+$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+$ echo "export HIVE_HADOOP_CLASSPATH=${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar:${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar" >> $KYUUBI_HOME/conf/kyuubi-env.sh
+```
diff --git a/content/docs/latest/deployment/engine_share_level.html b/content/docs/latest/_sources/deployment/engine_share_level.md.txt
similarity index 75%
copy from content/docs/latest/deployment/engine_share_level.html
copy to content/docs/latest/_sources/deployment/engine_share_level.md.txt
index 77e426d..2272c9d 100644
--- a/content/docs/latest/deployment/engine_share_level.html
+++ b/content/docs/latest/_sources/deployment/engine_share_level.md.txt
@@ -1,206 +1,4 @@
-
-
-<!DOCTYPE html>
-<html class="writer-html5" lang="en" >
-<head>
-  <meta charset="utf-8" />
-  
-  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-  
-  <title>3. The Share Level Of Kyuubi Engines &mdash; Kyuubi 1.5.1-incubating documentation</title>
-  
-
-  
-  <link rel="stylesheet" href="../_static/css/custom.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="../_static/css/custom.css" type="text/css" />
-
-  
-  
-
-  
-  
-
-  
-
-  
-  <!--[if lt IE 9]>
-    <script src="../_static/js/html5shiv.min.js"></script>
-  <![endif]-->
-  
-    
-      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
-        <script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js"></script>
-        <script src="../_static/jquery.js"></script>
-        <script src="../_static/underscore.js"></script>
-        <script src="../_static/doctools.js"></script>
-    
-    <script type="text/javascript" src="../_static/js/theme.js"></script>
-
-    
-    <link rel="index" title="Index" href="../genindex.html" />
-    <link rel="search" title="Search" href="../search.html" />
-    <link rel="next" title="4. The TTL Of Kyuubi Engines" href="engine_lifecycle.html" />
-    <link rel="prev" title="2. Deploy Kyuubi engines on Kubernetes" href="engine_on_kubernetes.html" /> 
-</head>
-
-<body class="wy-body-for-nav">
-
-   
-  <div class="wy-grid-for-nav">
-    
-    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
-      <div class="wy-side-scroll">
-        <div class="wy-side-nav-search" >
-          
-
-          
-            <a href="../index.html" class="icon icon-home"> Kyuubi
-          
-
-          
-            
-            <img src="../_static/kyuubi_logo_gray.png" class="logo" alt="Logo"/>
-          
-          </a>
-
-          
-            
-            
-          
-
-          
-<div role="search">
-  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
-    <input type="text" name="q" placeholder="Search docs" />
-    <input type="hidden" name="check_keywords" value="yes" />
-    <input type="hidden" name="area" value="default" />
-  </form>
-</div>
-
-          
-        </div>
-
-        
-        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
-          
-            
-            
-              
-            
-            
-              <p class="caption" role="heading"><span class="caption-text">Usage Guide</span></p>
-<ul class="current">
-<li class="toctree-l1"><a class="reference internal" href="../quick_start/index.html">Quick Start</a></li>
-<li class="toctree-l1 current"><a class="reference internal" href="index.html">Deploying Kyuubi</a><ul class="current">
-<li class="toctree-l2"><a class="reference internal" href="index.html#basics">Basics</a></li>
-<li class="toctree-l2"><a class="reference internal" href="index.html#configurations">Configurations</a></li>
-<li class="toctree-l2 current"><a class="reference internal" href="index.html#engines">Engines</a><ul class="current">
-<li class="toctree-l3"><a class="reference internal" href="engine_on_yarn.html">1. Deploy Kyuubi engines on Yarn</a></li>
-<li class="toctree-l3"><a class="reference internal" href="engine_on_kubernetes.html">2. Deploy Kyuubi engines on Kubernetes</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">3. The Share Level Of Kyuubi Engines</a><ul>
-<li class="toctree-l4"><a class="reference internal" href="#why-do-we-need-this-feature">3.1. Why do we need this feature?</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#the-current-supported-share-levels">3.2. The current supported share levels</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#related-configurations">3.3. Related Configurations</a></li>
-<li class="toctree-l4"><a class="reference internal" href="#conclusion">3.4. Conclusion</a></li>
-</ul>
-</li>
-<li class="toctree-l3"><a class="reference internal" href="engine_lifecycle.html">4. The TTL Of Kyuubi Engines</a></li>
-<li class="toctree-l3"><a class="reference internal" href="spark/index.html">5. The Spark SQL Engine Configuration Guide</a></li>
-</ul>
-</li>
-</ul>
-</li>
-<li class="toctree-l1"><a class="reference internal" href="../security/index.html">Security</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../client/index.html">Client Documentation</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../integrations/index.html">Integrations</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../monitor/index.html">Monitoring</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../sql/index.html">SQL References</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../tools/index.html">Tools</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Kyuubi Insider</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../overview/index.html">Overview</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Contributing</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../develop_tools/index.html">Develop Tools</a></li>
-<li class="toctree-l1"><a class="reference internal" href="../community/index.html">Community</a></li>
-</ul>
-<p class="caption" role="heading"><span class="caption-text">Appendix</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="../appendix/index.html">Appendixes</a></li>
-</ul>
-
-            
-          
-        </div>
-        
-      </div>
-    </nav>
-
-    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
-
-      
-      <nav class="wy-nav-top" aria-label="top navigation">
-        
-          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
-          <a href="../index.html">Kyuubi</a>
-        
-      </nav>
-
-
-      <div class="wy-nav-content">
-        
-        <div class="rst-content">
-        
-          
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-<div role="navigation" aria-label="breadcrumbs navigation">
-
-  <ul class="wy-breadcrumbs">
-    
-      <li><a href="../index.html" class="icon icon-home"></a> &raquo;</li>
-        
-          <li><a href="index.html">Deploying Kyuubi</a> &raquo;</li>
-        
-      <li><span class="section-number">3. </span>The Share Level Of Kyuubi Engines</li>
-    
-    
-      <li class="wy-breadcrumbs-aside">
-        
-          
-            <a href="../_sources/deployment/engine_share_level.md.txt" rel="nofollow"> View page source</a>
-          
-        
-      </li>
-    
-  </ul>
-
-  
-  <hr/>
-</div>
-          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
-           <div itemprop="articleBody">
-            
-  <!--
+<!--
  - Licensed to the Apache Software Foundation (ASF) under one or more
  - contributor license agreements.  See the NOTICE file distributed with
  - this work for additional information regarding copyright ownership.
@@ -215,231 +13,151 @@
  - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  - See the License for the specific language governing permissions and
  - limitations under the License.
- --><div align=center><p><img alt="../_images/kyuubi_logo.png" src="../_images/kyuubi_logo.png" /></p>
-</div><div class="section" id="the-share-level-of-kyuubi-engines">
-<h1><span class="section-number">3. </span>The Share Level Of Kyuubi Engines<a class="headerlink" href="#the-share-level-of-kyuubi-engines" title="Permalink to this headline">¶</a></h1>
-<p>The share level of Kyuubi engines describes the relationship between sessions and engines.
+ -->
+
+
+# The Share Level Of Kyuubi Engines
+
+The share level of Kyuubi engines describes the relationship between sessions and engines.
 It determines whether a new session can share an existing backend engine with other sessions or not.
+The sessions are also known as the JDBC/ODBC/Thrift connections that end-users create from clients, and the engines are standalone applications with the full capabilities of Spark SQL, Flink SQL (under development), running on single-node machines or clusters.
+
+The share level of Kyuubi engines works the same whether in HA or single-node mode.
+In other words, an engine can be shared cluster-wide by all Kyuubi server peers whenever possible.
+
+## Why do we need this feature?
+
+Apache Spark is a unified engine for large-scale data analytics.
 Using Spark to process data is like driving an all-wheel-drive hefty horsepower supercar.
+However,
+
+- Cars have their limits on 0-60 times.
+In a similar way, all Spark applications also have to warm up before going full speed.
+- Cars have a fixed number of seats and are not allowed to be overloaded.
+Due to the master-slave architecture of Spark and the resources configured ahead of time, the overall workload of a single application is predictable.
+- Cars have various shapes to meet our needs.
+
+With this feature, Kyuubi gives you a more flexible way to handle different big data workloads.
+
+## The current supported share levels
+
+The current supported share levels are,
+
+| Share Level | Syntax | Scenario | Isolation Degree | Sharability |
+| --- | --- | --- | --- | --- |
+| **CONNECTION** | One engine per session | Large-scale ETL <br/> Ad hoc | High | Low |
+| **USER** | One engine per user | Ad hoc <br/> Small-scale ETL | Medium | Medium |
+| **GROUP** | One engine per primary group | Ad hoc <br/> Small-scale ETL | Low | High |
+| **SERVER** | One engine per cluster | Admin | Highest If Secured <br/> Lowest If Unsecured | Admin ONLY If Secured |
+
+- Better isolation degree of engines gives us better stability of an engine and the query executions running on it.
+- Better sharability of engines means we are more likely to reuse an engine which is already in full speed.
+
+### CONNECTION
+
 <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-11-15T06:45:25.722Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safar [...]
 <script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
 </body>
+<div align=center>
+
+*Figure.1 CONNECTION Share Level*
+
 </div>
-<div class="section" id="user-default">
-<h3><span class="section-number">3.2.2. </span>USER(Default)<a class="headerlink" href="#user-default" title="Permalink to this headline">¶</a></h3>
+
+Each session with CONNECTION share level has a standalone engine for itself which is unreachable for anyone else.
+Within the session, a user or client can send multiple operation requests, including metadata calls or queries, to the corresponding engine.
+
+Although it is still an interactive form, this model does allow for more practical batch processing jobs as well.
+
+When closing a session, the corresponding engine will be shut down at the same time.
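+
+For instance, a client could pin a CONNECTION-level engine by passing the configuration directly in the JDBC connection URL. The sketch below is illustrative only; the server address `localhost:10009` and the user name are placeholders:
+
+```shell
+# this connection gets a standalone engine, which shuts down when the session closes
+bin/beeline -u 'jdbc:hive2://localhost:10009/;#kyuubi.engine.share.level=CONNECTION' -n tom
+```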
+
+### USER(Default)
+
 <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-11-15T06:49:50.020Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safar [...]
 <script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
 </body>
+<div align=center>
+
+*Figure.2 USER Share Level*
 </div>
-<div class="section" id="group">
-<h3><span class="section-number">3.2.3. </span>GROUP<a class="headerlink" href="#group" title="Permalink to this headline">¶</a></h3>
+
+All sessions with USER share level use the same engine if and only if the session user is the same.
+
+Those sessions share the same engine with objects belonging to the one and only `SparkContext` instance, including `Classes/Classloaders`, `SparkConf`, `Driver`/`Executor`s, `Hive Metastore Client`, etc.
+But each session can still have its own `SparkSession` instance, which contains separate session state, including temporary views, SQL config, UDFs etc.
+Setting `kyuubi.engine.single.spark.session` to true will make `SparkSession` instance a singleton and share across sessions.
+
+When closing a session, the corresponding engine will not be shut down.
+When all sessions are closed, the corresponding engine still has a time-to-live lifespan.
+This TTL allows new sessions to be established quickly without waiting for the engine to start.
+
+### GROUP
+
 <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-11-15T06:39:03.927Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safar [...]
 <script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
 </body>
+<div align=center>
+
+*Figure.3 GROUP Share Level*
+
+</div>
+
+
+An engine will be shared by all sessions created by all users belonging to the same primary group name.
 The engine will be launched with the group name as the effective username, so here the group name is a kind of special user who is able to access the compute resources/data of a team.
+It follows the [Hadoop GroupsMapping](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html) to map user to a primary group. If the primary group is not found, it falls back to the USER level.
+
+The mechanisms of `SparkContext`, `SparkSession` and TTL work similarly to the USER share level.
+
+**Tips for authorization in GROUP share level**:
+
+The session user and the primary group name (as sparkUser/execute user) will both be accessible at the engine side.
 By default, the sparkUser will be used to check the YARN/HDFS ACLs.
+If you want fine-grained access control for session user, you need to get it from `SparkContext.getLocalProperty("kyuubi.session.user")` and send it to security service, like Apache Ranger.
+
+### SERVER
+
 <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-11-15T07:07:11.985Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.4.0 Chrome/91.0.4472.164 Electron/13.5.0 Safar [...]
 <script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
 </body>
+<div align=center>
+
+*Figure.4 SERVER Share Level*
+
 </div>
-<div class="section" id="subdomain">
-<h3><span class="section-number">3.2.5. </span>Subdomain<a class="headerlink" href="#subdomain" title="Permalink to this headline">¶</a></h3>
-<p>For USER, GROUP, or SERVER share levels, you can further use <code class="docutils literal notranslate"><span class="pre">kyuubi.engine.share.level.subdomain</span></code> to isolate the engine.
+
+Literally, this model is similar to Spark Thrift Server with high availability.
+
+### Subdomain
+
+For USER, GROUP, or SERVER share levels, you can further use `kyuubi.engine.share.level.subdomain` to isolate the engine.
 That is, you can also create multiple engines for a single user, group or server(cluster).
+For example, in USER share level, you can use `kyuubi.engine.share.level.subdomain=sd1` and `kyuubi.engine.share.level.subdomain=sd2` to create two standalone engines for user `Tom`.
 
+The `kyuubi.engine.share.level.subdomain` shall be configured in the JDBC connection URL to tell the Kyuubi server which engine you want to use.
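+
+As an illustrative sketch (the server address, subdomain names, and user are placeholders):
+
+```shell
+# both connections run as Tom, but each subdomain maps to its own USER-level engine
+bin/beeline -u 'jdbc:hive2://localhost:10009/;#kyuubi.engine.share.level.subdomain=sd1' -n Tom
+bin/beeline -u 'jdbc:hive2://localhost:10009/;#kyuubi.engine.share.level.subdomain=sd2' -n Tom
+```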
 
+### Hybrid
 
+All supported share levels can be used together in a single Kyuubi server or cluster. 
+
+## Related Configurations
+
+- kyuubi.engine.share.level(kyuubi.session.engine.share.level)
+  - Default: USER
+  - Candidates: USER, CONNECTION, GROUP, SERVER
+  - Meaning: The base level for how an engine is created, cached and shared to sessions. 
+  - Usage: It can be set both in the server configuration file and also connection URL. The latter has higher priority.
+- kyuubi.session.engine.idle.timeout
+  - Default: PT30M (30 min)
+  - Candidates: a proper timeout
+  - Meaning: Time to live since engine becomes idle
+  - Usage: It can be set both in the server configuration file and also connection URL. The latter has higher priority.
+- kyuubi.engine.share.level.subdomain(kyuubi.engine.share.level.sub.domain)
+  - Default: <none>
+  - Candidates: a valid ZooKeeper child node
+  - Meaning: Add a subdomain under the base level to make further isolation for engines
+  - Usage: It can be set both in the server configuration file and also connection URL. The latter has higher priority.
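+
+For instance, a minimal sketch of system-wide defaults in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the values are illustrative, assuming the whitespace-separated key-value format of that file):
+
+```
+kyuubi.engine.share.level             USER
+kyuubi.session.engine.idle.timeout    PT1H
+kyuubi.engine.share.level.subdomain   sd1
+```
+
+Each of them can still be overridden per connection through the JDBC connection URL, which takes higher priority.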
+
+## Conclusion
+
+With this feature, end-users are able to leverage engines in different ways to handle their different workloads, such as large-scale ETL jobs and interactive ad hoc queries.
diff --git a/content/docs/latest/deployment/high_availability_guide.html b/content/docs/latest/_sources/deployment/high_availability_guide.md.txt
similarity index 87%
copy from content/docs/latest/deployment/high_availability_guide.html
copy to content/docs/latest/_sources/deployment/high_availability_guide.md.txt
index ea2ff06..0189432 100644
--- a/content/docs/latest/deployment/high_availability_guide.html
+++ b/content/docs/latest/_sources/deployment/high_availability_guide.md.txt
@@ -1,203 +1,4 @@
+<!--
  - Licensed to the Apache Software Foundation (ASF) under one or more
  - contributor license agreements.  See the NOTICE file distributed with
  - this work for additional information regarding copyright ownership.
@@ -212,129 +13,96 @@
  - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  - See the License for the specific language governing permissions and
  - limitations under the License.
+ -->
+
+
+# Kyuubi High Availability Guide
+
+As an enterprise-class ad-hoc SQL query service built on top of [Apache Spark](https://spark.apache.org/), Kyuubi takes high availability (HA) as a major characteristic, aiming to ensure an agreed level of service availability, such as a higher than normal period of uptime.
+
+Running Kyuubi in HA mode is to use groups of computers or containers that support SQL query service on Kyuubi that can be reliably utilized with a minimum amount of down-time. Kyuubi operates by using [Apache ZooKeeper](https://zookeeper.apache.org/) to harness redundant service instances in groups that provide continuous service when one or more components fail.
+
+Without HA, if a server crashes, Kyuubi will be unavailable until the crashed server is fixed. With HA, this situation will be remedied by automatic detection of hardware/software faults, and another Kyuubi service instance will immediately be ready to serve without requiring human intervention.
+
+## HA Architecture
+
+Currently, Kyuubi supports load balancing to make the whole system highly available.
+
+Load balancing aims to optimize the usage of all Kyuubi service units, maximize throughput, minimize response time, and avoid overload of a single unit.
+Using multiple Kyuubi service units with load balancing instead of a single unit may increase reliability and availability through redundancy. 
+
 <body><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;lightbox&quot;:false,&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;xml&quot;:&quot;&lt;mxfile host=\&quot;Electron\&quot; modified=\&quot;2021-12-08T07:03:35.897Z\&quot; agent=\&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.7 Chrome/91.0.4472.164 Ele [...]
 <script type="text/javascript" src="https://viewer.diagrams.net/js/viewer-static.min.js"></script>
-</body><div class="section" id="key-benefits">
-<h3><span class="section-number">3.1.1. </span>Key Benefits<a class="headerlink" href="#key-benefits" title="Permalink to this headline">¶</a></h3>
-<ul class="simple">
-<li><p>High concurrency</p>
-<ul>
-<li><p>By adding or removing Kyuubi server instances can easily scale up or down to meet the need of client requests.</p></li>
-</ul>
-</li>
-<li><p>Upgrade smoothly</p>
-<ul>
-<li><p>Kyuubi server supports stop gracefully. We could delete a <code class="docutils literal notranslate"><span class="pre">k.i.</span></code> but not stop it immediately.
-In this case, the <code class="docutils literal notranslate"><span class="pre">k.i.</span></code> will not take any new connection request but only operation requests from existing connections.
-After all connection are released, it stops then.</p></li>
-<li><p>The dependencies of Kyuubi engines are free to change, such as bump up versions, modify configurations, add external jars, relocate to another engine home. Everything will be reloaded during start and stop.</p></li>
-</ul>
-</li>
-</ul>
-</div>
-</div>
-<div class="section" id="system-side-deployment">
-<h2><span class="section-number">3.2. </span>System-side Deployment<a class="headerlink" href="#system-side-deployment" title="Permalink to this headline">¶</a></h2>
-<p>When applying HA to Kyuubi deployment, we need to be aware of the below two thing basically,</p>
-<ul class="simple">
-<li><p><code class="docutils literal notranslate"><span class="pre">kyuubi.ha.zookeeper.quorum</span></code> - the external zookeeper cluster address for deploy a <code class="docutils literal notranslate"><span class="pre">k.i.</span></code></p></li>
-<li><p><code class="docutils literal notranslate"><span class="pre">kyuubi.ha.zookeeper.namespace</span></code> - the root directory, a.k.a. the ServerSpace for deploy a <code class="docutils literal notranslate"><span class="pre">k.i.</span></code></p></li>
-</ul>
-<p>For more configurations, please see the HA section of <a class="reference external" href="./settings.html#ha">Introduction to the Kyuubi Configurations System</a></p>
-<div class="section" id="pseudo-mode">
-<h3><span class="section-number">3.2.1. </span>Pseudo mode<a class="headerlink" href="#pseudo-mode" title="Permalink to this headline">¶</a></h3>
-<p>When <code class="docutils literal notranslate"><span class="pre">kyuubi.ha.zookeeper.quorum</span></code> is not configured, a <code class="docutils literal notranslate"><span class="pre">k.i.</span></code> will start an embedded zookeeper service and expose the address of itself there.
-In this pseduo mode, the <code class="docutils literal notranslate"><span class="pre">k.i.</span></code> can be connected by clients through both raw ip address and zk quorum + namespace.
-But it doesn’t have any availability to being highly available.</p>
-</div>
-<div class="section" id="production-mode">
-<h3><span class="section-number">3.2.2. </span>Production mode<a class="headerlink" href="#production-mode" title="Permalink to this headline">¶</a></h3>
-<p>For production deployment purpose, an external zookeeper cluster is required for <code class="docutils literal notranslate"><span class="pre">kyuubi.ha.zookeeper.quorum</span></code>.
-In this mode, multiple <code class="docutils literal notranslate"><span class="pre">k.i.</span></code>s can be registered to the same ServerSpace configured by <code class="docutils literal notranslate"><span class="pre">kyuubi.ha.zookeeper.namespace</span></code> and serve together.</p>
-</div>
-</div>
-<div class="section" id="client-side-usage">
-<h2><span class="section-number">3.3. </span>Client-side Usage<a class="headerlink" href="#client-side-usage" title="Permalink to this headline">¶</a></h2>
-<p>With <a class="reference external" href="https://mvnrepository.com/artifact/org.apache.kyuubi/kyuubi-hive-jdbc">Kyuubi Hive JDBC Driver</a> or vanilla Hive JDBC Driver, a client can specify service discovery mode in JDBC connection string, i.e. <code class="docutils literal notranslate"><span class="pre">serviceDiscoveryMode=zooKeeper;</span></code> and set <code class="docutils literal notranslate"><span class="pre">zooKeeperNamespace=kyuubi;</span></code>, then it can randomly pick  [...]
-<p>For example,</p>
-<div class="highlight-shell notranslate"><div class="highlight"><pre><span></span>bin/beeline -u <span class="s1">&#39;jdbc:hive2://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi&#39;</span> -n kentyao
-</pre></div>
-</div>
-</div>
-</div>
-
-
-           </div>
-           
-          </div>
-          <footer>
-    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
-        <a href="settings.html" class="btn btn-neutral float-right" title="1. Introduction to the Kyuubi Configurations System" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
-        <a href="hive_metastore.html" class="btn btn-neutral float-left" title="2. Integration with Hive Metastore" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
-    </div>
-
-  <hr/>
-
-  <div role="contentinfo">
-    <p>
-        &#169; Copyright 
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the &#34;License&#34;); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an &#34;AS IS&#34; BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-.
-
-    </p>
-  </div>
-    
-    
-    
-    Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
-    
-    <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
-    
-    provided by <a href="https://readthedocs.org">Read the Docs</a>. 
-
-</footer>
-        </div>
-      </div>
-
-    </section>
-
-  </div>
-  
+
+
+### Key Benefits
+
+- High concurrency
+  - By adding or removing Kyuubi server instances, Kyuubi can easily scale up or down to meet the demand of client requests.
+- Upgrade smoothly
+  - Kyuubi server supports graceful shutdown. We could delete a `k.i.` without stopping it immediately.
+    In this case, the `k.i.` will not accept any new connection requests, but only operation requests from existing connections.
+    After all connections are released, it stops.
+  - The dependencies of Kyuubi engines are free to change, such as bumping up versions, modifying configurations, adding external jars, or relocating to another engine home. Everything will be reloaded during start and stop.
+
+
+## System-side Deployment
+
+When applying HA to a Kyuubi deployment, we basically need to be aware of the two things below,
+
+- `kyuubi.ha.zookeeper.quorum` - the external ZooKeeper cluster address for deploying a `k.i.`
+- `kyuubi.ha.zookeeper.namespace` - the root directory, a.k.a. the ServerSpace, for deploying a `k.i.`
+
+For more configurations, please see the HA section of [Introduction to the Kyuubi Configurations System](./settings.html#ha)
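+
+As a sketch, the two settings above could look like this in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the ZooKeeper addresses and namespace are placeholders, assuming the whitespace-separated key-value format):
+
+```
+kyuubi.ha.zookeeper.quorum      zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+kyuubi.ha.zookeeper.namespace   kyuubi
+```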
+
+### Pseudo mode
+
+When `kyuubi.ha.zookeeper.quorum` is not configured, a `k.i.` will start an embedded ZooKeeper service and expose its own address there.
+In this pseudo mode, the `k.i.` can be connected by clients through both its raw IP address and the zk quorum + namespace.
+But it does not provide any high availability.
+
+### Production mode
 
-  <script type="text/javascript">
-      jQuery(function () {
-          SphinxRtdTheme.Navigation.enable(true);
-      });
-  </script>
+For production deployment purposes, an external ZooKeeper cluster is required for `kyuubi.ha.zookeeper.quorum`.
+In this mode, multiple `k.i.`s can be registered to the same ServerSpace configured by `kyuubi.ha.zookeeper.namespace` and serve together.
 
+
+## Client-side Usage
+
+With [Kyuubi Hive JDBC Driver](https://mvnrepository.com/artifact/org.apache.kyuubi/kyuubi-hive-jdbc) or vanilla Hive JDBC Driver, a client can specify service discovery mode in JDBC connection string, i.e. `serviceDiscoveryMode=zooKeeper;` and set `zooKeeperNamespace=kyuubi;`, then it can randomly pick one of the Kyuubi service uris from the specified ZooKeeper addresses in the `/kyuubi` path.
+
+For example,
+
+```shell
+bin/beeline -u 'jdbc:hive2://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kentyao
+```
+
+## How to Hot Upgrade Kyuubi Server
+
+Kyuubi supports hot upgrading one of the servers in an HA cluster, which is transparent to users.
+
+- If you have specified a custom port for Kyuubi server 
+
+  For example, if the Kyuubi server is started at host `kyuubi.host` with port `10009`, you can run the following command using `bin/kyuubi-ctl`:
   
+  ```shell
+  ./bin/kyuubi-ctl delete server --host "kyuubi.host" --port "10009"
+  ```
   
+  The Kyuubi server will stop after all sessions are closed, and then you can start a new Kyuubi server.
 
+- If you use a random port for Kyuubi server
+
+  You can just start the new Kyuubi server, then run the following command using `bin/kyuubi-ctl`:
+
+  ```shell
+  ./bin/kyuubi-ctl delete server --host "kyuubi.host" --port "${PORT_FOR_OLD_KYUUBI_SERVER}"
+  ```
+
+  The `${PORT_FOR_OLD_KYUUBI_SERVER}` can be found by:
+
+  ```shell
+  grep "server.KyuubiThriftBinaryFrontendService: Starting and exposing JDBC connection at" logs/kyuubi-*.out
+  ```
+
+  Note that you do not need to care when the old Kyuubi server actually stops, since new incoming sessions are routed to the new Kyuubi server and the other remaining servers.
diff --git a/content/docs/latest/_sources/deployment/hive_metastore.md.txt b/content/docs/latest/_sources/deployment/hive_metastore.md.txt
new file mode 100644
index 0000000..d4592b7
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/hive_metastore.md.txt
@@ -0,0 +1,210 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Integration with Hive Metastore
+
+In this section, you will learn how to configure Kyuubi to interact with Hive Metastore.
+
+- A common Hive metastore server could be set at Kyuubi server side
+- Individual Hive metastore servers could be used for end users to set
+
+## Requirements
+
+- A running Hive metastore server
+  - [Hive Metastore Administration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration)
+  - [Configuring the Hive Metastore for CDH](https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html)
+- A Spark binary distribution built with `-Phive` support
+  - Use the built-in one in the Kyuubi distribution
+  - Download from [Spark official website](https://spark.apache.org/downloads.html)
+  - Build from Spark source, [Building With Hive and JDBC Support](http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support)
+- A copy of Hive client configuration
+
+So the whole thing here is to let Spark applications use this copy of Hive configuration to start a Hive metastore client of their own to talk to the Hive metastore server.
+
+## Default Behavior
+
+By default, Kyuubi launches Spark SQL engines pointing to a dummy embedded [Apache Derby](https://db.apache.org/derby/)-based metastore for each application,
+and this metadata can only be seen by one user at a time, e.g.
+
+```shell script
+bin/beeline -u 'jdbc:hive2://localhost:10009/' -n kentyao
+Connecting to jdbc:hive2://localhost:10009/
+Connected to: Spark SQL (version 1.0.0-SNAPSHOT)
+Driver: Hive JDBC (version 2.3.7)
+Transaction isolation: TRANSACTION_REPEATABLE_READ
+Beeline version 2.3.7 by Apache Hive
+0: jdbc:hive2://localhost:10009/> show databases;
+2020-11-16 23:50:50.388 INFO operation.ExecuteStatement:
+           Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
+                 application ID:  local-1605541809797
+                 application web UI: http://192.168.1.14:60165
+                 master: local[*]
+                 deploy mode: client
+                 version: 3.0.1
+           Start time: 2020-11-16T15:50:09.123Z
+           User: kentyao
+2020-11-16 23:50:50.404 INFO metastore.HiveMetaStore: 2: get_databases: *
+2020-11-16 23:50:50.404 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_databases: *
+2020-11-16 23:50:50.423 INFO operation.ExecuteStatement: Processing kentyao's query[8453e657-c1c4-4391-8406-ab4747a66c45]: RUNNING_STATE -> FINISHED_STATE, statement: show databases, time taken: 0.035 seconds
++------------+
+| namespace  |
++------------+
+| default    |
++------------+
+1 row selected (0.122 seconds)
+0: jdbc:hive2://localhost:10009/> show tables;
+2020-11-16 23:50:52.957 INFO operation.ExecuteStatement:
+           Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
+                 application ID:  local-1605541809797
+                 application web UI: http://192.168.1.14:60165
+                 master: local[*]
+                 deploy mode: client
+                 version: 3.0.1
+           Start time: 2020-11-16T15:50:09.123Z
+           User: kentyao
+2020-11-16 23:50:52.968 INFO metastore.HiveMetaStore: 2: get_database: default
+2020-11-16 23:50:52.968 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_database: default
+2020-11-16 23:50:52.970 INFO metastore.HiveMetaStore: 2: get_database: default
+2020-11-16 23:50:52.970 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_database: default
+2020-11-16 23:50:52.972 INFO metastore.HiveMetaStore: 2: get_tables: db=default pat=*
+2020-11-16 23:50:52.972 INFO HiveMetaStore.audit: ugi=kentyao	ip=unknown-ip-addr	cmd=get_tables: db=default pat=*
+2020-11-16 23:50:52.986 INFO operation.ExecuteStatement: Processing kentyao's query[ff902582-ba29-433b-b70a-c25ead1353a8]: RUNNING_STATE -> FINISHED_STATE, statement: show tables, time taken: 0.03 seconds
++-----------+------------+--------------+
+| database  | tableName  | isTemporary  |
++-----------+------------+--------------+
++-----------+------------+--------------+
+No rows selected (0.04 seconds)
+```
+
+Use this mode for experimental purposes only.
+
+In a real production environment, we always have a communal standalone metadata store
+to manage the metadata of persistent relational entities, e.g. databases, tables, columns, partitions, for fast access.
+Usually, the Hive metastore serves as the de facto standard.
+
+## Related Configurations
+
+These are the basic needs for a Hive metastore client to communicate with the remote Hive Metastore server.
+
+Whether to use remote metastore database or server mode depends on the server-side configuration.
+
+### Remote Metastore Database
+
+Name | Value | Meaning
+--- | --- | ---
+javax.jdo.option.ConnectionURL | jdbc:mysql://&lt;hostname&gt;/&lt;databaseName&gt;?<br>createDatabaseIfNotExist=true | metadata is stored in a MySQL server
+javax.jdo.option.ConnectionDriverName | com.mysql.jdbc.Driver | MySQL JDBC driver class
+javax.jdo.option.ConnectionUserName | &lt;username&gt; | user name for connecting to MySQL server
+javax.jdo.option.ConnectionPassword | &lt;password&gt; | password for connecting to MySQL server
+
+### Remote Metastore Server
+
+Name | Value | Meaning
+--- | --- | ---
+hive.metastore.uris | thrift://&lt;host&gt;:&lt;port&gt;,thrift://&lt;host1&gt;:&lt;port1&gt; | <div style='width: 200pt;word-wrap: break-word;white-space: normal'>host and port for the Thrift metastore server.</div>
+
+## Activate Configurations
+
+### Via kyuubi-defaults.conf
+
+In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, all _**Hive primitive configurations**_, e.g. `hive.metastore.uris`,
+and the **_Spark derivatives_**, which are prefixed with `spark.hive.` or `spark.hadoop.`, e.g. `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
+will be loaded as Hive primitives by the Hive client inside the Spark application.
+
+Kyuubi will take these configurations as system-wide defaults for all applications it launches.
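+
+For instance, a sketch of such an entry in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the metastore address is a placeholder):
+
+```
+spark.hive.metastore.uris    thrift://hms.example.com:9083
+```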
+
+### Via hive-site.xml
+
+Place your copy of `hive-site.xml` into `$SPARK_HOME/conf`,
+every single Spark application will automatically load this config file to its classpath.
+
+This version of configuration has lower priority than those in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`.
+
+### Via JDBC Connection URL
+
+We can pass _**Hive primitives**_ or **_Spark derivatives_** directly in the JDBC connection URL, e.g.
+
+```
+jdbc:hive2://localhost:10009/;#hive.metastore.uris=thrift://localhost:9083
+```
+
+This will override the defaults in `$SPARK_HOME/conf/hive-site.xml` and `$KYUUBI_HOME/conf/kyuubi-defaults.conf` for each _**user account**_.
+
+With this feature, end users are able to visit different Hive metastore server instances.
+Similarly, this works for other services like HDFS, YARN too.
+
+**Limitation:** As most Hive configurations are final and unmodifiable in Spark at runtime,
+this only takes effect when instantiating the Spark applications and will be ignored when reusing an existing application.
+So, keep this in mind.
+
+**!!!THIS WORKS ONLY ONCE!!!**
+
+**!!!THIS WORKS ONLY ONCE!!!**
+
+**!!!THIS WORKS ONLY ONCE!!!**
+
+### Via SET syntax
+
+Most Hive configurations are final and unmodifiable in Spark at runtime, so keep this in mind.
+
+**!!!THIS WON'T WORK!!!**
+
+**!!!THIS WON'T WORK!!!**
+
+**!!!THIS WON'T WORK!!!**
+
+## Version Compatibility
+
+If backward compatibility is guaranteed by Hive versioning,
+we can always use a lower version Hive metastore client to communicate with the higher version Hive metastore server.
+
+For example, Spark 3.0 was released with a built-in Hive client (2.3.7), so, ideally, the server version should be &gt;= 2.3.x.
+
+If you do have a legacy Hive metastore server that cannot be easily upgraded, you may face an issue like the following by default,
+
+```java
+Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'get_table_req'
+	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
+	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:1567)
+	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:1554)
+	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1350)
+	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTable(SessionHiveMetaStoreClient.java:127)
+	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
+	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
+	at java.lang.reflect.Method.invoke(Method.java:498)
+	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
+	at com.sun.proxy.$Proxy37.getTable(Unknown Source)
+	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
+	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
+	at java.lang.reflect.Method.invoke(Method.java:498)
+	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2336)
+	at com.sun.proxy.$Proxy37.getTable(Unknown Source)
+	at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1274)
+	... 93 more
+```
+
+To prevent this problem, we can use Spark's [Interacting with Different Versions of Hive Metastore](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore).
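+
+For instance, a sketch of pinning the metastore client version through Spark configurations (the version is illustrative; `maven` asks Spark to download the matching client jars, see the Spark documentation linked above for the supported options):
+
+```
+spark.sql.hive.metastore.version    1.2.1
+spark.sql.hive.metastore.jars       maven
+```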
+
+## Further Readings
+
+- Hive Wiki
+  - [Hive Metastore Administration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration)
+- Spark Online Documentation
+  - [Custom Hadoop/Hive Configuration](http://spark.apache.org/docs/latest/configuration.html#custom-hadoophive-configuration)
+  - [Hive Tables](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html)
diff --git a/content/docs/latest/_sources/deployment/index.rst.txt b/content/docs/latest/_sources/deployment/index.rst.txt
new file mode 100644
index 0000000..e682680
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/index.rst.txt
@@ -0,0 +1,53 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+Deploying Kyuubi
+================
+
+In this section, you will learn how to deploy Kyuubi against different platforms.
+
+Basics
+------
+
+.. toctree::
+    :maxdepth: 2
+    :glob:
+
+    kyuubi_on_kubernetes
+    hive_metastore
+    high_availability_guide
+
+Configurations
+--------------
+
+.. toctree::
+    :maxdepth: 2
+    :glob:
+
+    settings
+
+Engines
+-------
+
+.. toctree::
+    :maxdepth: 2
+    :glob:
+
+    engine_on_yarn
+    engine_on_kubernetes
+    engine_share_level
+    engine_lifecycle
+    spark/index
\ No newline at end of file
diff --git a/content/docs/latest/_sources/deployment/kyuubi_on_kubernetes.md.txt b/content/docs/latest/_sources/deployment/kyuubi_on_kubernetes.md.txt
new file mode 100644
index 0000000..8125920
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/kyuubi_on_kubernetes.md.txt
@@ -0,0 +1,103 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# Deploy Kyuubi On Kubernetes
+
+## Requirements
+
+If you want to deploy Kyuubi on Kubernetes, you'd better get a sense of the following things.
+
+* Use Kyuubi official docker image or build Kyuubi docker image
+* An active Kubernetes cluster
+* Read about [Deploy Kyuubi engines on Kubernetes](engine_on_kubernetes.md)
+* [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
+* KubeConfig of the target cluster
+
+## Kyuubi Official Docker Image 
+
+You can find the official docker image at [Apache Kyuubi (Incubating) Docker Hub](https://registry.hub.docker.com/r/apache/kyuubi).
+
+## Build Kyuubi Docker Image
+
+You can build custom Docker images from the `${KYUUBI_HOME}/bin/docker-image-tool.sh` contained in the binary package.
+
+Examples:
+```shell
+  - Build and push image with tag "v1.4.0" to docker.io/myrepo
+    $0 -r docker.io/myrepo -t v1.4.0 build
+    $0 -r docker.io/myrepo -t v1.4.0 push
+
+  - Build and push with tag "v1.4.0" and Spark-3.2.1 as base image to docker.io/myrepo
+    $0 -r docker.io/myrepo -t v1.4.0 -b BASE_IMAGE=repo/spark:3.2.1 build
+    $0 -r docker.io/myrepo -t v1.4.0 push
+
+  - Build and push for multiple archs to docker.io/myrepo
+    $0 -r docker.io/myrepo -t v1.4.0 -X build
+
+  - Build with Spark placed "/path/spark"
+    $0 -s /path/spark build
+    
+  - Build with Spark Image myrepo/spark:3.1.0
+    $0 -S /opt/spark -b BASE_IMAGE=myrepo/spark:3.1.0 build
+```
+
+`${KYUUBI_HOME}/bin/docker-image-tool.sh` uses the Kyuubi version as the default Docker tag and always builds the `${repo}/kyuubi:${tag}` image.
+
+The script can also bundle an external Spark distribution into the Kyuubi image, so that the image acts as a client for submitting tasks, via `-s ${SPARK_HOME}`.
+
+Of course, if you already have an image that contains the Spark binary package, you don't have to copy Spark locally. Use your Spark image as the base image with the `-S ${SPARK_HOME_IN_DOCKER}` and `-b BASE_IMAGE=${SPARK_IMAGE}` arguments.
+
+You can run `${KYUUBI_HOME}/bin/docker-image-tool.sh -h` to see all available parameters.
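+
+For example, a minimal sketch of a typical build-and-push flow, assuming a hypothetical repository `docker.io/myrepo` and a local Spark distribution at `/opt/spark`:
+```shell
+# Build a Kyuubi image that bundles the local Spark distribution and tag it v1.6.0
+${KYUUBI_HOME}/bin/docker-image-tool.sh -r docker.io/myrepo -t v1.6.0 -s /opt/spark build
+# Push the resulting image to the repository
+${KYUUBI_HOME}/bin/docker-image-tool.sh -r docker.io/myrepo -t v1.6.0 push
+```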
+
+## Deploy
+
+Multiple YAML files are provided under `${KYUUBI_HOME}/docker/` to help you deploy Kyuubi.
+
+You can deploy single-node Kyuubi through `${KYUUBI_HOME}/docker/kyuubi-pod.yaml` or `${KYUUBI_HOME}/docker/kyuubi-deployment.yaml`.
+
+Also, you can use `${KYUUBI_HOME}/docker/kyuubi-service.yaml` to expose Kyuubi as a Kubernetes Service.
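+
+A minimal sketch of applying these manifests with `kubectl` (pick the deployment or pod manifest that matches your needs):
+```shell
+# Deploy a single-node Kyuubi instance
+kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-deployment.yaml
+# Expose it as a Kubernetes Service
+kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-service.yaml
+```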
+
+## Config
+
+You can configure Kyuubi the old-fashioned way by placing `kyuubi-defaults.conf` inside the image, but this approach is not recommended on Kubernetes.
+
+Kyuubi provides `${KYUUBI_HOME}/docker/kyuubi-configmap.yaml` to build a ConfigMap for Kyuubi.
+
+You can find out how to use it in the comments inside the above file.
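+
+A minimal sketch of creating the configuration ConfigMap; the ConfigMap name `kyuubi-defaults` in the second command is illustrative:
+```shell
+# Apply the ConfigMap template shipped with Kyuubi
+kubectl apply -f ${KYUUBI_HOME}/docker/kyuubi-configmap.yaml
+# Or create a ConfigMap directly from a local kyuubi-defaults.conf
+kubectl create configmap kyuubi-defaults --from-file=${KYUUBI_HOME}/conf/kyuubi-defaults.conf
+```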
+
+If you want to know about Kyuubi engine configurations on Kubernetes, refer to [Deploy Kyuubi engines on Kubernetes](engine_on_kubernetes.md).
+
+## Connect
+
+If you do not use a Service or HostNetwork to expose the IP address of the node where Kyuubi is deployed,
+you can connect from inside the pod like this:
+```shell
+kubectl exec -it kyuubi-example -- /bin/bash
+${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009'
+```
+
+Or you can submit tasks directly through local beeline:
+```shell
+${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
+```
+When using a Service of type NodePort, `port` means the nodePort and `hostname` means the hostname of any Kubernetes node.
+
+When using HostNetwork, `port` means the Kyuubi containerPort and `hostname` means the hostname of the node where Kyuubi is deployed.
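+
+For instance, a sketch of connecting through a NodePort Service; the Service name `kyuubi-svc` is hypothetical, use the name defined in your `kyuubi-service.yaml`:
+```shell
+# Look up the nodePort exposed by the Kyuubi Service
+kubectl get svc kyuubi-svc -o jsonpath='{.spec.ports[0].nodePort}'
+# Connect with beeline using any Kubernetes node hostname and that nodePort
+${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://<node-hostname>:<node-port>'
+```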
+
+## TODO
+
+Kyuubi will provide other connection methods in the future, such as `Ingress` and `LoadBalancer`.
diff --git a/content/docs/latest/_sources/deployment/settings.md.txt b/content/docs/latest/_sources/deployment/settings.md.txt
new file mode 100644
index 0000000..6513ed0
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/settings.md.txt
@@ -0,0 +1,620 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY [org.apache.kyuubi.config.AllKyuubiConfiguration] -->
+
+
+# Introduction to the Kyuubi Configurations System
+
+Kyuubi provides several ways to configure the system and corresponding engines.
+
+
+## Environments
+
+
+You can configure environment variables in `$KYUUBI_HOME/conf/kyuubi-env.sh`, e.g., `JAVA_HOME`; that Java runtime will then be used both for the Kyuubi server instance and the applications it launches. You can also set variables in the subprocess's env configuration file, e.g., `$SPARK_HOME/conf/spark-env.sh`, to use a more specific environment for SQL engine applications.
+```bash
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#
+# - JAVA_HOME               Java runtime to use. By default use "java" from PATH.
+#
+#
+# - KYUUBI_CONF_DIR         Directory containing the Kyuubi configurations to use.
+#                           (Default: $KYUUBI_HOME/conf)
+# - KYUUBI_LOG_DIR          Directory for Kyuubi server-side logs.
+#                           (Default: $KYUUBI_HOME/logs)
+# - KYUUBI_PID_DIR          Directory stores the Kyuubi instance pid file.
+#                           (Default: $KYUUBI_HOME/pid)
+# - KYUUBI_MAX_LOG_FILES    Maximum number of Kyuubi server logs can rotate to.
+#                           (Default: 5)
+# - KYUUBI_JAVA_OPTS        JVM options for the Kyuubi server itself in the form "-Dx=y".
+#                           (Default: none).
+# - KYUUBI_CTL_JAVA_OPTS    JVM options for the Kyuubi ctl itself in the form "-Dx=y".
+#                           (Default: none).
+# - KYUUBI_BEELINE_OPTS     JVM options for the Kyuubi BeeLine in the form "-Dx=Y".
+#                           (Default: none)
+# - KYUUBI_NICENESS         The scheduling priority for Kyuubi server.
+#                           (Default: 0)
+# - KYUUBI_WORK_DIR_ROOT    Root directory for launching sql engine applications.
+#                           (Default: $KYUUBI_HOME/work)
+# - HADOOP_CONF_DIR         Directory containing the Hadoop / YARN configuration to use.
+# - YARN_CONF_DIR           Directory containing the YARN configuration to use.
+#
+# - SPARK_HOME              Spark distribution which you would like to use in Kyuubi.
+# - SPARK_CONF_DIR          Optional directory where the Spark configuration lives.
+#                           (Default: $SPARK_HOME/conf)
+# - FLINK_HOME              Flink distribution which you would like to use in Kyuubi.
+# - FLINK_CONF_DIR          Optional directory where the Flink configuration lives.
+#                           (Default: $FLINK_HOME/conf)
+# - FLINK_HADOOP_CLASSPATH  Required Hadoop jars when you use the Kyuubi Flink engine.
+# - HIVE_HOME               Hive distribution which you would like to use in Kyuubi.
+# - HIVE_CONF_DIR           Optional directory where the Hive configuration lives.
+#                           (Default: $HIVE_HOME/conf)
+# - HIVE_HADOOP_CLASSPATH   Required Hadoop jars when you use the Kyuubi Hive engine.
+#
+
+
+## Examples ##
+
+# export JAVA_HOME=/usr/jdk64/jdk1.8.0_152
+# export SPARK_HOME=/opt/spark
+# export FLINK_HOME=/opt/flink
+# export HIVE_HOME=/opt/hive
+# export FLINK_HADOOP_CLASSPATH=/path/to/hadoop-client-runtime-3.3.2.jar:/path/to/hadoop-client-api-3.3.2.jar
+# export HIVE_HADOOP_CLASSPATH=${HADOOP_HOME}/share/hadoop/common/lib/commons-collections-3.2.2.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-runtime-3.1.0.jar:${HADOOP_HOME}/share/hadoop/client/hadoop-client-api-3.1.0.jar:${HADOOP_HOME}/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar
+# export HADOOP_CONF_DIR=/usr/ndp/current/mapreduce_client/conf
+# export YARN_CONF_DIR=/usr/ndp/current/yarn/conf
+# export KYUUBI_JAVA_OPTS="-Xmx10g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark -XX:MaxDirectMemorySize=1024m  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribut [...]
+# export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark"
+```
+
+For environment variables that only need to be transferred to the engine side, you can set them with a Kyuubi configuration item of the form `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`, the environment variable `SPARK_DRIVER_MEMORY` with value `4g` would be transferred to the engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the value of `SPARK_CONF_DIR` on the engine side is set to `/apache/confs/spark/conf`.
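+
+A minimal sketch of such entries in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the values are illustrative):
+```bash
+# Environment variables transferred to the engine side only
+kyuubi.engineEnv.SPARK_DRIVER_MEMORY    4g
+kyuubi.engineEnv.SPARK_CONF_DIR         /apache/confs/spark/conf
+```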
+
+## Kyuubi Configurations
+
+You can configure the Kyuubi properties in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`. For example:
+```bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+## Kyuubi Configurations
+
+#
+# kyuubi.authentication           NONE
+# kyuubi.frontend.bind.host       localhost
+# kyuubi.frontend.bind.port       10009
+#
+
+# Details in https://kyuubi.apache.org/docs/latest/deployment/settings.html
+```
+
+### Authentication
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.authentication|NONE|A comma separated list of client authentication types.<ul> <li>NOSASL: raw transport.</li> <li>NONE: no authentication check.</li> <li>KERBEROS: Kerberos/GSSAPI authentication.</li> <li>CUSTOM: User-defined authentication.</li> <li>JDBC: JDBC query authentication.</li> <li>LDAP: Lightweight Directory Access Protocol authentication.</li></ul> Note that: For KERBEROS, it is SASL/GSSAPI mechanism, and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanism. I [...]
+kyuubi.authentication.custom.class|&lt;undefined&gt;|User-defined authentication implementation of org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider|string|1.3.0
+kyuubi.authentication.jdbc.driver.class|&lt;undefined&gt;|Driver class name for JDBC Authentication Provider.|string|1.6.0
+kyuubi.authentication.jdbc.password|&lt;undefined&gt;|Database password for JDBC Authentication Provider.|string|1.6.0
+kyuubi.authentication.jdbc.query|&lt;undefined&gt;|Query SQL template with placeholders for JDBC Authentication Provider to execute. Authentication passes if the result set is not empty.The SQL statement must start with the `SELECT` clause. Available placeholders are `${user}` and `${password}`.|string|1.6.0
+kyuubi.authentication.jdbc.url|&lt;undefined&gt;|JDBC URL for JDBC Authentication Provider.|string|1.6.0
+kyuubi.authentication.jdbc.user|&lt;undefined&gt;|Database user for JDBC Authentication Provider.|string|1.6.0
+kyuubi.authentication.ldap.base.dn|&lt;undefined&gt;|LDAP base DN.|string|1.0.0
+kyuubi.authentication.ldap.domain|&lt;undefined&gt;|LDAP domain.|string|1.0.0
+kyuubi.authentication.ldap.guidKey|uid|LDAP attribute name whose values are unique in this LDAP server.For example:uid or cn.|string|1.2.0
+kyuubi.authentication.ldap.url|&lt;undefined&gt;|SPACE character separated LDAP connection URL(s).|string|1.0.0
+kyuubi.authentication.sasl.qop|auth|Sasl QOP enable higher levels of protection for Kyuubi communication with clients.<ul> <li>auth - authentication only (default)</li> <li>auth-int - authentication plus integrity protection</li> <li>auth-conf - authentication plus integrity and confidentiality protection. This is applicable only if Kyuubi is configured to use Kerberos authentication.</li> </ul>|string|1.0.0
+
+
+### Backend
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.backend.engine.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in SQL engine applications|duration|1.0.0
+kyuubi.backend.engine.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in SQL engine applications|duration|1.0.0
+kyuubi.backend.engine.exec.pool.size|100|Number of threads in the operation execution thread pool of SQL engine applications|int|1.0.0
+kyuubi.backend.engine.exec.pool.wait.queue.size|100|Size of the wait queue for the operation execution thread pool in SQL engine applications|int|1.0.0
+kyuubi.backend.server.event.json.log.path|file:///tmp/kyuubi/events|The location of server events go for the builtin JSON logger|string|1.4.0
+kyuubi.backend.server.event.loggers||A comma separated list of server history loggers, where session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.backend.server.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.4.0
+kyuubi.backend.server.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in Kyuubi server|duration|1.0.0
+kyuubi.backend.server.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in Kyuubi server|duration|1.0.0
+kyuubi.backend.server.exec.pool.size|100|Number of threads in the operation execution thread pool of Kyuubi server|int|1.0.0
+kyuubi.backend.server.exec.pool.wait.queue.size|100|Size of the wait queue for the operation execution thread pool of Kyuubi server|int|1.0.0
+
+
+### Batch
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.batch.application.check.interval|PT5S|The interval to check batch job application information.|duration|1.6.0
+kyuubi.batch.conf.ignore.list||A comma separated list of ignored keys for batch conf. If the batch conf contains any of them, the key and the corresponding value will be removed silently during batch job submission. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering. You can also pre-define some config for batch job submission with prefix: kyuubi.batchConf.[batchType]. For example, you can pre-define `spark.master [...]
+
+
+### Credentials
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.credentials.check.interval|PT5M|The interval to check the expiration of cached <user, CredentialsRef> pairs.|duration|1.6.0
+kyuubi.credentials.hadoopfs.enabled|true|Whether to renew Hadoop filesystem delegation tokens|boolean|1.4.0
+kyuubi.credentials.hadoopfs.uris||Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here.|seq|1.4.0
+kyuubi.credentials.hive.enabled|true|Whether to renew Hive metastore delegation token|boolean|1.4.0
+kyuubi.credentials.idle.timeout|PT6H|inactive users' credentials will be expired after a configured timeout|duration|1.6.0
+kyuubi.credentials.renewal.interval|PT1H|How often Kyuubi renews one user's delegation tokens|duration|1.4.0
+kyuubi.credentials.renewal.retry.wait|PT1M|How long to wait before retrying to fetch new credentials after a failure.|duration|1.4.0
+kyuubi.credentials.update.wait.timeout|PT1M|How long to wait until credentials are ready.|duration|1.5.0
+
+
+### Ctl
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.ctl.batch.log.query.interval|PT3S|The interval for fetching batch logs.|duration|1.6.0
+kyuubi.ctl.rest.auth.schema|basic|The authentication schema. Valid values are: basic, spnego.|string|1.6.0
+kyuubi.ctl.rest.base.url|&lt;undefined&gt;|The REST API base URL, which contains the scheme (http:// or https://), host name, port number|string|1.6.0
+kyuubi.ctl.rest.connect.timeout|PT30S|The timeout[ms] for establishing the connection with the kyuubi server.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
+kyuubi.ctl.rest.request.attempt.wait|PT3S|How long to wait between attempts of ctl rest request.|duration|1.6.0
+kyuubi.ctl.rest.request.max.attempts|3|The max attempts number for ctl rest request.|int|1.6.0
+kyuubi.ctl.rest.socket.timeout|PT2M|The timeout[ms] for waiting for data packets after connection is established.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
+kyuubi.ctl.rest.spnego.host|&lt;undefined&gt;|When auth schema is spnego, need to config spnego host.|string|1.6.0
+
+
+### Delegation
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.delegation.key.update.interval|PT24H|unused yet|duration|1.0.0
+kyuubi.delegation.token.gc.interval|PT1H|unused yet|duration|1.0.0
+kyuubi.delegation.token.max.lifetime|PT168H|unused yet|duration|1.0.0
+kyuubi.delegation.token.renew.interval|PT168H|unused yet|duration|1.0.0
+
+
+### Engine
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.engine.connection.url.use.hostname|true|(deprecated) When true, engine register with hostname to zookeeper. When spark run on k8s with cluster mode, set to false to ensure that server can connect to engine|boolean|1.3.0
+kyuubi.engine.deregister.exception.classes||A comma separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself.|seq|1.2.0
+kyuubi.engine.deregister.exception.messages||A comma separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself.|seq|1.2.0
+kyuubi.engine.deregister.exception.ttl|PT30M|Time to live(TTL) for exceptions pattern specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits the kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait for self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures.|duration|1.2.0
+kyuubi.engine.deregister.job.max.failures|4|Number of failures of job before deregistering the engine.|int|1.2.0
+kyuubi.engine.event.json.log.path|file:///tmp/kyuubi/events|The location of all the engine events go for the builtin JSON logger.<ul><li>Local Path: start with 'file://'</li><li>HDFS Path: start with 'hdfs://'</li></ul>|string|1.3.0
+kyuubi.engine.event.loggers|SPARK|A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.3.0
+kyuubi.engine.flink.extra.classpath|&lt;undefined&gt;|The extra classpath for the flink sql engine, for configuring location of hadoop client jars, etc|string|1.6.0
+kyuubi.engine.flink.java.options|&lt;undefined&gt;|The extra java options for the flink sql engine|string|1.6.0
+kyuubi.engine.flink.memory|1g|The heap memory for the flink sql engine|string|1.6.0
+kyuubi.engine.hive.extra.classpath|&lt;undefined&gt;|The extra classpath for the hive query engine, for configuring location of hadoop client jars, etc|string|1.6.0
+kyuubi.engine.hive.java.options|&lt;undefined&gt;|The extra java options for the hive query engine|string|1.6.0
+kyuubi.engine.hive.memory|1g|The heap memory for the hive query engine|string|1.6.0
+kyuubi.engine.initialize.sql|SHOW DATABASES|SemiColon-separated list of SQL statements to be initialized in the newly created engine before queries. i.e. use `SHOW DATABASES` to eagerly active HiveClient. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.2.0
+kyuubi.engine.jdbc.connection.password|&lt;undefined&gt;|The password is used for connecting to server|string|1.6.0
+kyuubi.engine.jdbc.connection.properties||The additional properties are used for connecting to server|seq|1.6.0
+kyuubi.engine.jdbc.connection.provider|&lt;undefined&gt;|The connection provider is used for getting a connection from server|string|1.6.0
+kyuubi.engine.jdbc.connection.url|&lt;undefined&gt;|The server url that engine will connect to|string|1.6.0
+kyuubi.engine.jdbc.connection.user|&lt;undefined&gt;|The user is used for connecting to server|string|1.6.0
+kyuubi.engine.jdbc.driver.class|&lt;undefined&gt;|The driver class for jdbc engine connection|string|1.6.0
+kyuubi.engine.jdbc.extra.classpath|&lt;undefined&gt;|The extra classpath for the jdbc query engine, for configuring location of jdbc driver, etc|string|1.6.0
+kyuubi.engine.jdbc.java.options|&lt;undefined&gt;|The extra java options for the jdbc query engine|string|1.6.0
+kyuubi.engine.jdbc.memory|1g|The heap memory for the jdbc query engine|string|1.6.0
+kyuubi.engine.jdbc.type|&lt;undefined&gt;|The short name of jdbc type|string|1.6.0
+kyuubi.engine.operation.convert.catalog.database.enabled|true|When set to true, The engine converts the JDBC methods of set/get Catalog and set/get Schema to the implementation of different engines|boolean|1.6.0
+kyuubi.engine.operation.log.dir.root|engine_operation_logs|Root directory for query operation log at engine-side.|string|1.4.0
+kyuubi.engine.pool.name|engine-pool|The name of engine pool.|string|1.5.0
+kyuubi.engine.pool.size|-1|The size of engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold).|int|1.4.0
+kyuubi.engine.pool.size.threshold|9|This parameter is introduced as a server-side parameter, and controls the upper limit of the engine pool.|int|1.4.0
+kyuubi.engine.session.initialize.sql||SemiColon-separated list of SQL statements to be initialized in the newly created engine session before queries. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.3.0
+kyuubi.engine.share.level|USER|Engines will be shared in different levels, available configs are: <ul> <li>CONNECTION: engine will not be shared but only used by the current client connection</li> <li>USER: engine will be shared by all sessions created by a unique username, see also kyuubi.engine.share.level.subdomain</li> <li>GROUP: engine will be shared by all sessions created by all users belong to the same primary group name. The engine will be launched by the group name as the effec [...]
+kyuubi.engine.share.level.sub.domain|&lt;undefined&gt;|(deprecated) - Using kyuubi.engine.share.level.subdomain instead|string|1.2.0
+kyuubi.engine.share.level.subdomain|&lt;undefined&gt;|Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string values that must be a valid zookeeper sub path. For example, for `USER` share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the `USER` share level. When disable engine pool, use 'default' if absent.|string|1.4.0
+kyuubi.engine.single.spark.session|false|When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.|boolean|1.3.0
+kyuubi.engine.trino.extra.classpath|&lt;undefined&gt;|The extra classpath for the trino query engine, for configuring other libs which may need by the trino engine |string|1.6.0
+kyuubi.engine.trino.java.options|&lt;undefined&gt;|The extra java options for the trino query engine|string|1.6.0
+kyuubi.engine.trino.memory|1g|The heap memory for the trino query engine|string|1.6.0
+kyuubi.engine.type|SPARK_SQL|Specify the detailed engine that supported by the Kyuubi. The engine type bindings to SESSION scope. This configuration is experimental. Currently, available configs are: <ul> <li>SPARK_SQL: specify this engine type will launch a Spark engine which can provide all the capacity of the Apache Spark. Note, it's a default engine type.</li> <li>FLINK_SQL: specify this engine type will launch a Flink engine which can provide all the capacity of the Apache Flink.</l [...]
+kyuubi.engine.ui.retainedSessions|200|The number of SQL client sessions kept in the Kyuubi Query Engine web UI.|int|1.4.0
+kyuubi.engine.ui.retainedStatements|200|The number of statements kept in the Kyuubi Query Engine web UI.|int|1.4.0
+kyuubi.engine.ui.stop.enabled|true|When true, allows Kyuubi engine to be killed from the Spark Web UI.|boolean|1.3.0
+kyuubi.engine.user.isolated.spark.session|true|When set to false, if the engine is running in a group or server share level, all the JDBC/ODBC connections will be isolated against the user. Including: the temporary views, function registries, SQL configuration and the current database. Note that, it does not affect if the share level is connection or user.|boolean|1.6.0
+kyuubi.engine.user.isolated.spark.session.idle.interval|PT1M|The interval to check if the user isolated spark session is timeout.|duration|1.6.0
+kyuubi.engine.user.isolated.spark.session.idle.timeout|PT6H|If kyuubi.engine.user.isolated.spark.session is false, we will release the spark session if its corresponding user is inactive after this configured timeout.|duration|1.6.0
+
+
+### Frontend
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.frontend.backoff.slot.length|PT0.1S|(deprecated) Time to back off during login to the thrift frontend service.|duration|1.0.0
+kyuubi.frontend.bind.host|&lt;undefined&gt;|(deprecated) Hostname or IP of the machine on which to run the thrift frontend service via binary protocol.|string|1.0.0
+kyuubi.frontend.bind.port|10009|(deprecated) Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.0.0
+kyuubi.frontend.connection.url.use.hostname|true|When true, frontend services prefer hostname, otherwise, ip address|boolean|1.5.0
+kyuubi.frontend.login.timeout|PT20S|(deprecated) Timeout for Thrift clients during login to the thrift frontend service.|duration|1.0.0
+kyuubi.frontend.max.message.size|104857600|(deprecated) Maximum message size in bytes a Kyuubi server will accept.|int|1.0.0
+kyuubi.frontend.max.worker.threads|999|(deprecated) Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
+kyuubi.frontend.min.worker.threads|9|(deprecated) Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
+kyuubi.frontend.mysql.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the MySQL frontend service.|string|1.4.0
+kyuubi.frontend.mysql.bind.port|3309|Port of the machine on which to run the MySQL frontend service.|int|1.4.0
+kyuubi.frontend.mysql.max.worker.threads|999|Maximum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
+kyuubi.frontend.mysql.min.worker.threads|9|Minimum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
+kyuubi.frontend.mysql.netty.worker.threads|&lt;undefined&gt;|Number of thread in the netty worker event loop of MySQL frontend service. Use min(cpu_cores, 8) in default.|int|1.4.0
+kyuubi.frontend.mysql.worker.keepalive.time|PT1M|Time(ms) that an idle async thread of the command execution thread pool will wait for a new task to arrive before terminating in MySQL frontend service|duration|1.4.0
+kyuubi.frontend.protocols|THRIFT_BINARY|A comma separated list for all frontend protocols <ul> <li>THRIFT_BINARY - HiveServer2 compatible thrift binary protocol.</li> <li>THRIFT_HTTP - HiveServer2 compatible thrift http protocol.</li> <li>REST - Kyuubi defined REST API(experimental).</li>  <li>MYSQL - MySQL compatible text protocol(experimental).</li> </ul>|seq|1.4.0
+kyuubi.frontend.proxy.http.client.ip.header|X-Real-IP|The http header to record the real client ip address. If your server is behind a load balancer or other proxy, the server will see this load balancer or proxy IP address as the client IP address, to get around this common issue, most load balancers or proxies offer the ability to record the real remote IP address in an HTTP header that will be added to the request for other devices to use. Note that, because the header value can be sp [...]
+kyuubi.frontend.rest.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the REST frontend service.|string|1.4.0
+kyuubi.frontend.rest.bind.port|10099|Port of the machine on which to run the REST frontend service.|int|1.4.0
+kyuubi.frontend.thrift.backoff.slot.length|PT0.1S|Time to back off during login to the thrift frontend service.|duration|1.4.0
+kyuubi.frontend.thrift.binary.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via binary protocol.|string|1.4.0
+kyuubi.frontend.thrift.binary.bind.port|10009|Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.4.0
+kyuubi.frontend.thrift.http.allow.user.substitution|true|Allow alternate user to be specified as part of open connection request when using HTTP transport mode.|boolean|1.6.0
+kyuubi.frontend.thrift.http.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via http protocol.|string|1.6.0
+kyuubi.frontend.thrift.http.bind.port|10010|Port of the machine on which to run the thrift frontend service via http protocol.|int|1.6.0
+kyuubi.frontend.thrift.http.compression.enabled|true|Enable thrift http compression via Jetty compression support|boolean|1.6.0
+kyuubi.frontend.thrift.http.cookie.auth.enabled|true|When true, Kyuubi in HTTP transport mode, will use cookie based authentication mechanism|boolean|1.6.0
+kyuubi.frontend.thrift.http.cookie.domain|&lt;undefined&gt;|Domain for the Kyuubi generated cookies|string|1.6.0
+kyuubi.frontend.thrift.http.cookie.is.httponly|true|HttpOnly attribute of the Kyuubi generated cookie.|boolean|1.6.0
+kyuubi.frontend.thrift.http.cookie.max.age|86400|Maximum age in seconds for server side cookie used by Kyuubi in HTTP mode.|int|1.6.0
+kyuubi.frontend.thrift.http.cookie.path|&lt;undefined&gt;|Path for the Kyuubi generated cookies|string|1.6.0
+kyuubi.frontend.thrift.http.max.idle.time|PT30M|Maximum idle time for a connection on the server when in HTTP mode.|duration|1.6.0
+kyuubi.frontend.thrift.http.path|cliservice|Path component of URL endpoint when in HTTP mode.|string|1.6.0
+kyuubi.frontend.thrift.http.request.header.size|6144|Request header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
+kyuubi.frontend.thrift.http.response.header.size|6144|Response header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
+kyuubi.frontend.thrift.http.ssl.keystore.password|&lt;undefined&gt;|SSL certificate keystore password.|string|1.6.0
+kyuubi.frontend.thrift.http.ssl.keystore.path|&lt;undefined&gt;|SSL certificate keystore location.|string|1.6.0
+kyuubi.frontend.thrift.http.ssl.protocol.blacklist|SSLv2,SSLv3|SSL Versions to disable when using HTTP transport mode.|string|1.6.0
+kyuubi.frontend.thrift.http.use.SSL|false|Set this to true for using SSL encryption in http mode.|boolean|1.6.0
+kyuubi.frontend.thrift.http.xsrf.filter.enabled|false|If enabled, Kyuubi will block any requests made to it over http if an X-XSRF-HEADER header is not present|boolean|1.6.0
+kyuubi.frontend.thrift.login.timeout|PT20S|Timeout for Thrift clients during login to the thrift frontend service.|duration|1.4.0
+kyuubi.frontend.thrift.max.message.size|104857600|Maximum message size in bytes a Kyuubi server will accept.|int|1.4.0
+kyuubi.frontend.thrift.max.worker.threads|999|Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
+kyuubi.frontend.thrift.min.worker.threads|9|Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
+kyuubi.frontend.thrift.worker.keepalive.time|PT1M|Keep-alive time (in milliseconds) for an idle worker thread|duration|1.4.0
+kyuubi.frontend.worker.keepalive.time|PT1M|(deprecated) Keep-alive time (in milliseconds) for an idle worker thread|duration|1.0.0
+
+
+### Ha
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.ha.addresses||The connection string for the discovery ensemble|string|1.6.0
+kyuubi.ha.client.class|org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient|Class name for service discovery client.<ul> <li>Zookeeper: org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient</li> <li>Etcd: org.apache.kyuubi.ha.client.etcd.EtcdDiscoveryClient</li></ul>|string|1.6.0
+kyuubi.ha.etcd.lease.timeout|PT10S|Timeout for etcd keep alive lease. The kyuubi server will known unexpected loss of engine after up to this seconds.|duration|1.6.0
+kyuubi.ha.etcd.ssl.ca.path|&lt;undefined&gt;|Where the etcd CA certificate file is stored.|string|1.6.0
+kyuubi.ha.etcd.ssl.client.certificate.path|&lt;undefined&gt;|Where the etcd SSL certificate file is stored.|string|1.6.0
+kyuubi.ha.etcd.ssl.client.key.path|&lt;undefined&gt;|Where the etcd SSL key file is stored.|string|1.6.0
+kyuubi.ha.etcd.ssl.enabled|false|When set to true, will build a ssl secured etcd client.|boolean|1.6.0
+kyuubi.ha.namespace|kyuubi|The root directory for the service to deploy its instance uri|string|1.6.0
+kyuubi.ha.zookeeper.acl.enabled|false|Set to true if the zookeeper ensemble is kerberized|boolean|1.0.0
+kyuubi.ha.zookeeper.auth.digest|&lt;undefined&gt;|The digest auth string is used for zookeeper authentication, like: username:password.|string|1.3.2
+kyuubi.ha.zookeeper.auth.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab is used for zookeeper authentication.|string|1.3.2
+kyuubi.ha.zookeeper.auth.principal|&lt;undefined&gt;|Name of the Kerberos principal is used for zookeeper authentication.|string|1.3.2
+kyuubi.ha.zookeeper.auth.type|NONE|The type of zookeeper authentication, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
+kyuubi.ha.zookeeper.connection.base.retry.wait|1000|Initial amount of time to wait between retries to the zookeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.connection.max.retries|3|Max retry times for connecting to the zookeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.connection.max.retry.wait|30000|Max amount of time to wait between retries for BOUNDED_EXPONENTIAL_BACKOFF policy can reach, or max time until elapsed for UNTIL_ELAPSED policy to connect the zookeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.connection.retry.policy|EXPONENTIAL_BACKOFF|The retry policy for connecting to the zookeeper ensemble, all candidates are: <ul><li>ONE_TIME</li><li> N_TIME</li><li> EXPONENTIAL_BACKOFF</li><li> BOUNDED_EXPONENTIAL_BACKOFF</li><li> UNTIL_ELAPSED</li></ul>|string|1.0.0
+kyuubi.ha.zookeeper.connection.timeout|15000|The timeout(ms) of creating the connection to the zookeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.engine.auth.type|NONE|The type of zookeeper authentication for engine, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
+kyuubi.ha.zookeeper.namespace|kyuubi|(deprecated) The root directory for the service to deploy its instance uri|string|1.0.0
+kyuubi.ha.zookeeper.node.creation.timeout|PT2M|Timeout for creating zookeeper node|duration|1.2.0
+kyuubi.ha.zookeeper.publish.configs|false|When set to true, publish Kerberos configs to Zookeeper.Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch.|boolean|1.4.0
+kyuubi.ha.zookeeper.quorum||(deprecated) The connection string for the zookeeper ensemble|string|1.0.0
+kyuubi.ha.zookeeper.session.timeout|60000|The timeout(ms) of a connected session to be idled|int|1.0.0
+
+
+### Kinit
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.kinit.interval|PT1H|How often will Kyuubi server run `kinit -kt [keytab] [principal]` to renew the local Kerberos credentials cache|duration|1.0.0
+kyuubi.kinit.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab.|string|1.0.0
+kyuubi.kinit.max.attempts|10|How many times will `kinit` process retry|int|1.0.0
+kyuubi.kinit.principal|&lt;undefined&gt;|Name of the Kerberos principal.|string|1.0.0
+
+
+### Kubernetes
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.kubernetes.context|&lt;undefined&gt;|The desired context from your kubernetes config file used to configure the K8S client for interacting with the cluster.|string|1.6.0
+
+
+### Metadata
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.metadata.cleaner.enabled|true|Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the metadata that is in terminate state with max age limitation.|boolean|1.6.0
+kyuubi.metadata.cleaner.interval|PT30M|The interval to check and clean expired metadata.|duration|1.6.0
+kyuubi.metadata.max.age|PT72H|The maximum age of metadata, the metadata that exceeds the age will be cleaned.|duration|1.6.0
+kyuubi.metadata.recovery.threads|10|The number of threads for recovery from metadata store when Kyuubi server restarting.|int|1.6.0
+kyuubi.metadata.request.retry.interval|PT5S|The interval to check and trigger the metadata request retry tasks.|duration|1.6.0
+kyuubi.metadata.request.retry.queue.size|65536|The maximum queue size for buffering metadata requests in memory when the external metadata storage is down. Requests will be dropped if the queue exceeds.|int|1.6.0
+kyuubi.metadata.request.retry.threads|10|Number of threads in the metadata request retry manager thread pool. The metadata store might be unavailable sometimes and the requests will fail, to tolerant for this case and unblock the main thread, we support to retry the failed requests in async way.|int|1.6.0
+kyuubi.metadata.store.class|org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore|Fully qualified class name for server metadata store.|string|1.6.0
+kyuubi.metadata.store.jdbc.database.schema.init|true|Whether to init the jdbc metadata store database schema.|boolean|1.6.0
+kyuubi.metadata.store.jdbc.database.type|DERBY|The database type for server jdbc metadata store.<ul> <li>DERBY: Apache Derby, jdbc driver `org.apache.derby.jdbc.AutoloadedDriver`.</li> <li>MYSQL: MySQL, jdbc driver `com.mysql.jdbc.Driver`.</li> <li>CUSTOM: User-defined database type, need to specify corresponding jdbc driver.</li> Note that: The jdbc datasource is powered by HiKariCP, for datasource properties, please specify them with prefix: kyuubi.metadata.store.jdbc.datasource. For e [...]
+kyuubi.metadata.store.jdbc.driver|&lt;undefined&gt;|JDBC driver class name for server jdbc metadata store.|string|1.6.0
+kyuubi.metadata.store.jdbc.password||The password for server jdbc metadata store.|string|1.6.0
+kyuubi.metadata.store.jdbc.url|jdbc:derby:memory:kyuubi_state_store_db;create=true|The jdbc url for server jdbc metadata store. By defaults, it is a DERBY in-memory database url, and the state information is not shared across kyuubi instances. To enable multiple kyuubi instances high available, please specify a production jdbc url.|string|1.6.0
+kyuubi.metadata.store.jdbc.user||The username for server jdbc metadata store.|string|1.6.0
+
+
+### Metrics
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.metrics.console.interval|PT5S|How often should report metrics to console|duration|1.2.0
+kyuubi.metrics.enabled|true|Set to true to enable kyuubi metrics system|boolean|1.2.0
+kyuubi.metrics.json.interval|PT5S|How often should report metrics to json file|duration|1.2.0
+kyuubi.metrics.json.location|metrics|Where the json metrics file located|string|1.2.0
+kyuubi.metrics.prometheus.path|/metrics|URI context path of prometheus metrics HTTP server|string|1.2.0
+kyuubi.metrics.prometheus.port|10019|Prometheus metrics HTTP server port|int|1.2.0
+kyuubi.metrics.reporters|JSON|A comma separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to json file periodically.</li> <li>PROMETHEUS - PrometheusReporter which exposes metrics in prometheus format.</li> <li>SLF4J - Slf4jReporter which outputs measurements to system log p [...]
+kyuubi.metrics.slf4j.interval|PT5S|How often should report metrics to SLF4J logger|duration|1.2.0
+
+
+### Operation
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.operation.idle.timeout|PT3H|Operation will be closed when it's not accessed for this duration of time|duration|1.0.0
+kyuubi.operation.interrupt.on.cancel|true|When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished.|boolean|1.2.0
+kyuubi.operation.language|SQL|Choose a programing language for the following inputs <ul><li>SQL: (Default) Run all following statements as SQL queries.</li> <li>SCALA: Run all following input a scala codes</li></ul>|string|1.5.0
+kyuubi.operation.log.dir.root|server_operation_logs|Root directory for query operation log at server-side.|string|1.4.0
+kyuubi.operation.plan.only.excludes|ResetCommand,SetCommand,SetNamespaceCommand,UseStatement,SetCatalogAndNamespace|Comma-separated list of query plan names, in the form of simple class names, i.e, for `set abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as `switch databases`, `set properties`, or `create temporary view` e.t.c, which are used for setup evaluating environments for analyzing actual queries, we can use this config to exclude them and let them take  [...]
+kyuubi.operation.plan.only.mode|NONE|Whether to perform the statement in a PARSE, ANALYZE, OPTIMIZE, PHYSICAL, EXECUTION only way without executing the query. When it is NONE, the statement will be fully executed|string|1.4.0
+kyuubi.operation.progress.enabled|false|Whether to enable the operation progress. When true, the operation progress will be returned in `GetOperationStatus`.|boolean|1.6.0
+kyuubi.operation.query.timeout|&lt;undefined&gt;|Timeout for query executions at server-side, take affect with client-side timeout(`java.sql.Statement.setQueryTimeout`) together, a running query will be cancelled automatically if timeout. It's off by default, which means only client-side take fully control whether the query should timeout or not. If set, client-side timeout capped at this point. To cancel the queries right away without waiting task to finish, consider enabling kyuubi.ope [...]
+kyuubi.operation.result.max.rows|0|Max rows of Spark query results. Rows that exceeds the limit would be ignored. By setting this value to 0 to disable the max rows limit.|int|1.6.0
+kyuubi.operation.scheduler.pool|&lt;undefined&gt;|The scheduler pool of job. Note that, this config should be used after change Spark config spark.scheduler.mode=FAIR.|string|1.1.1
+kyuubi.operation.spark.listener.enabled|true|When set to true, Spark engine registers a SQLOperationListener before executing the statement, logs a few summary statistics when each stage completes.|boolean|1.6.0
+kyuubi.operation.status.polling.timeout|PT5S|Timeout(ms) for long polling asynchronous running sql query's status|duration|1.0.0
+
+
+### Server
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.server.limit.connections.per.ipaddress|&lt;undefined&gt;|Maximum kyuubi server connections per ipaddress. Any user exceeding this limit will not be allowed to connect.|int|1.6.0
+kyuubi.server.limit.connections.per.user|&lt;undefined&gt;|Maximum kyuubi server connections per user. Any user exceeding this limit will not be allowed to connect.|int|1.6.0
+kyuubi.server.limit.connections.per.user.ipaddress|&lt;undefined&gt;|Maximum kyuubi server connections per user:ipaddress combination. Any user-ipaddress exceeding this limit will not be allowed to connect.|int|1.6.0
+kyuubi.server.name|&lt;undefined&gt;|The name of Kyuubi Server.|string|1.5.0
+kyuubi.server.redaction.regex|&lt;undefined&gt;|Regex to decide which Kyuubi contain sensitive information. When this regex matches a property key or value, the value is redacted from the various logs.||1.6.0
+
+
+### Session
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.session.check.interval|PT5M|The check interval for session timeout.|duration|1.0.0
+kyuubi.session.conf.advisor|&lt;undefined&gt;|A config advisor plugin for Kyuubi Server. This plugin can provide some custom configs for different user or session configs and overwrite the session configs before open a new session. This config value should be a class which is a child of 'org.apache.kyuubi.plugin.SessionConfAdvisor' which has zero-arg constructor.|string|1.5.0
+kyuubi.session.conf.ignore.list||A comma separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
+kyuubi.session.conf.restrict.list||A comma separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
+kyuubi.session.engine.alive.probe.enabled|false|Whether to enable the engine alive probe, it true, we will create a companion thrift client that sends simple request to check whether the engine is keep alive.|boolean|1.6.0
+kyuubi.session.engine.alive.probe.interval|PT10S|The interval for engine alive probe.|duration|1.6.0
+kyuubi.session.engine.alive.timeout|PT2M|The timeout for engine alive. If there is no alive probe success in the last timeout window, the engine will be marked as no-alive.|duration|1.6.0
+kyuubi.session.engine.check.interval|PT1M|The check interval for engine timeout|duration|1.0.0
+kyuubi.session.engine.flink.main.resource|&lt;undefined&gt;|The package used to create Flink SQL engine remote job. If it is undefined, Kyuubi will use the default|string|1.4.0
+kyuubi.session.engine.flink.max.rows|1000000|Max rows of Flink query results. For batch queries, rows that exceeds the limit would be ignored. For streaming queries, the query would be canceled if the limit is reached.|int|1.5.0
+kyuubi.session.engine.hive.main.resource|&lt;undefined&gt;|The package used to create Hive engine remote job. If it is undefined, Kyuubi will use the default|string|1.6.0
+kyuubi.session.engine.idle.timeout|PT30M|engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.|duration|1.0.0
+kyuubi.session.engine.initialize.timeout|PT3M|Timeout for starting the background engine, e.g. SparkSQLEngine.|duration|1.0.0
+kyuubi.session.engine.launch.async|true|When opening kyuubi session, whether to launch backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously.|boolean|1.4.0
+kyuubi.session.engine.log.timeout|PT24H|If we use Spark as the engine then the session submit log is the console output of spark-submit. We will retain the session submit log until over the config value.|duration|1.1.0
+kyuubi.session.engine.login.timeout|PT15S|The timeout of creating the connection to remote sql query engine|duration|1.0.0
+kyuubi.session.engine.share.level|USER|(deprecated) - Using kyuubi.engine.share.level instead|string|1.0.0
+kyuubi.session.engine.spark.main.resource|&lt;undefined&gt;|The package used to create Spark SQL engine remote application. If it is undefined, Kyuubi will use the default|string|1.0.0
+kyuubi.session.engine.spark.max.lifetime|PT0S|Max lifetime for spark engine, the engine will self-terminate when it reaches the end of life. 0 or negative means not to self-terminate.|duration|1.6.0
+kyuubi.session.engine.spark.progress.timeFormat|yyyy-MM-dd HH:mm:ss.SSS|The time format of the progress bar|string|1.6.0
+kyuubi.session.engine.spark.progress.update.interval|PT1S|Update period of progress bar.|duration|1.6.0
+kyuubi.session.engine.spark.showProgress|false|When true, show the progress bar in the spark engine log.|boolean|1.6.0
+kyuubi.session.engine.startup.error.max.size|8192|During engine bootstrapping, if error occurs, using this config to limit the length error message(characters).|int|1.1.0
+kyuubi.session.engine.startup.maxLogLines|10|The maximum number of engine log lines when errors occur during engine startup phase. Note that this max lines is for client-side to help track engine startup issue.|int|1.4.0
+kyuubi.session.engine.startup.waitCompletion|true|Whether to wait for completion after engine starts. If false, the startup process will be destroyed after the engine is started. Note that only use it when the driver is not running locally, such as yarn-cluster mode; Otherwise, the engine will be killed.|boolean|1.5.0
+kyuubi.session.engine.trino.connection.catalog|&lt;undefined&gt;|The default catalog that trino engine will connect to|string|1.5.0
+kyuubi.session.engine.trino.connection.url|&lt;undefined&gt;|The server url that trino engine will connect to|string|1.5.0
+kyuubi.session.engine.trino.main.resource|&lt;undefined&gt;|The package used to create Trino engine remote job. If it is undefined, Kyuubi will use the default|string|1.5.0
+kyuubi.session.engine.trino.showProgress|true|When true, show the progress bar and final info in the trino engine log.|boolean|1.6.0
+kyuubi.session.engine.trino.showProgress.debug|false|When true, show the progress debug info in the trino engine log.|boolean|1.6.0
+kyuubi.session.idle.timeout|PT6H|session idle timeout, it will be closed when it's not accessed for this duration|duration|1.2.0
+kyuubi.session.local.dir.allow.list||The local dir list that are allowed to access by the kyuubi session application. User might set some parameters such as `spark.files` and it will upload some local files when launching the kyuubi engine, if the local dir allow list is defined, kyuubi will check whether the path to upload is in the allow list. Note that, if it is empty, there is no limitation for that and please use absolute path list.|seq|1.6.0
+kyuubi.session.name|&lt;undefined&gt;|A human readable name of session and we use empty string by default. This name will be recorded in event. Note that, we only apply this value from session conf.|string|1.4.0
+kyuubi.session.timeout|PT6H|(deprecated)session timeout, it will be closed when it's not accessed for this duration|duration|1.0.0
+
+
+### Spnego
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.spnego.keytab|&lt;undefined&gt;|Keytab file for SPNego principal|string|1.6.0
+kyuubi.spnego.principal|&lt;undefined&gt;|SPNego service principal, typical value would look like HTTP/_HOST@EXAMPLE.COM. SPNego service principal would be used when restful Kerberos security is enabled. This needs to be set only if SPNEGO is to be used in authentication.|string|1.6.0
+
+
+### Zookeeper
+
+Key | Default | Meaning | Type | Since
+--- | --- | --- | --- | ---
+kyuubi.zookeeper.embedded.client.port|2181|clientPort for the embedded zookeeper server to listen for client connections, a client here could be Kyuubi server, engine and JDBC client|int|1.2.0
+kyuubi.zookeeper.embedded.client.port.address|&lt;undefined&gt;|clientPortAddress for the embedded zookeeper server to|string|1.2.0
+kyuubi.zookeeper.embedded.data.dir|embedded_zookeeper|dataDir for the embedded zookeeper server where stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.|string|1.2.0
+kyuubi.zookeeper.embedded.data.log.dir|embedded_zookeeper|dataLogDir for the embedded zookeeper server where writes the transaction log .|string|1.2.0
+kyuubi.zookeeper.embedded.directory|embedded_zookeeper|The temporary directory for the embedded zookeeper server|string|1.0.0
+kyuubi.zookeeper.embedded.max.client.connections|120|maxClientCnxns for the embedded zookeeper server to limits the number of concurrent connections of a single client identified by IP address|int|1.2.0
+kyuubi.zookeeper.embedded.max.session.timeout|60000|maxSessionTimeout in milliseconds for the embedded zookeeper server will allow the client to negotiate. Defaults to 20 times the tickTime|int|1.2.0
+kyuubi.zookeeper.embedded.min.session.timeout|6000|minSessionTimeout in milliseconds for the embedded zookeeper server will allow the client to negotiate. Defaults to 2 times the tickTime|int|1.2.0
+kyuubi.zookeeper.embedded.port|2181|The port of the embedded zookeeper server|int|1.0.0
+kyuubi.zookeeper.embedded.tick.time|3000|tickTime in milliseconds for the embedded zookeeper server|int|1.2.0
+
+## Spark Configurations
+
+### Via spark-defaults.conf
+
+Setting them in `$SPARK_HOME/conf/spark-defaults.conf` supplies default values for the SQL engine application. Available properties can be found in the Spark official online documentation for [Spark Configurations](http://spark.apache.org/docs/latest/configuration.html)
+
+### Via kyuubi-defaults.conf
+
+Setting them in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` also supplies default values for the SQL engine application. These properties will override all settings in `$SPARK_HOME/conf/spark-defaults.conf`
+
+### Via JDBC Connection URL
+
+Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine (see the beeline sketch below). For example: ```jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g```
+
+- **Runtime SQL Configuration**
+
+  - For [Runtime SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration), they will take effect every time
+
+- **Static SQL and Spark Core Configuration**
+
+  - For [Static SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other Spark core configs, e.g. `spark.executor.memory`, they will take effect only if there is no existing SQL engine application. Otherwise, they will just be ignored
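+
+A sketch of opening such a session from the command line, assuming Kyuubi is reachable at `localhost:10009` as in the example URL above:
+```shell
+# Pass Spark configs per session via the JDBC connection URL
+${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g'
+```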
+
+### Via SET Syntax
+
+Please refer to the Spark official online documentation for [SET Command](http://spark.apache.org/docs/latest/sql-ref-syntax-aux-conf-mgmt-set.html)
+
+## Flink Configurations
+
+### Via flink-conf.yaml
+
+Setting them in `$FLINK_HOME/conf/flink-conf.yaml` supplies default values for the SQL engine application. Available properties can be found in the Flink official online documentation for [Flink Configurations](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/)
+
+### Via kyuubi-defaults.conf
+
+Setting them in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` also supplies default values for the SQL engine application. You can use properties with the additional prefix `flink.` to override settings in `$FLINK_HOME/conf/flink-conf.yaml`.
+
+For example:
+```
+flink.parallelism.default 2
+flink.taskmanager.memory.process.size 5g
+```
+
+The above options in `kyuubi-defaults.conf` will set `parallelism.default: 2` and `taskmanager.memory.process.size: 5g` in the Flink configuration.
+
+### Via JDBC Connection URL
+
+Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example: ```jdbc:hive2://localhost:10009/default;#parallelism.default=2;taskmanager.memory.process.size=5g```
+
+### Via SET Statements
+
+Please refer to the Flink official online documentation for [SET Statements](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/set/).
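+
+For instance, a minimal sketch of a Flink SET statement issued from an open session (the parallelism value below is only illustrative; depending on your Flink version, the key and value may need to be quoted as shown):
+
+```sql
+-- override the default parallelism for subsequent statements in this session
+SET 'parallelism.default' = '2';
+-- list the current configuration
+SET;
+```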
+
+## Logging
+
+Kyuubi uses [log4j](https://logging.apache.org/log4j/2.x/) for logging. You can configure it using `$KYUUBI_HOME/conf/log4j2.xml`.
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one or more
+  ~ contributor license agreements.  See the NOTICE file distributed with
+  ~ this work for additional information regarding copyright ownership.
+  ~ The ASF licenses this file to You under the Apache License, Version 2.0
+  ~ (the "License"); you may not use this file except in compliance with
+  ~ the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
+<!-- Provide log4j2.xml.template to fix `ERROR Filters contains invalid attributes "onMatch", "onMismatch"`, see KYUUBI-2247 -->
+<!-- Extra logging related to initialization of Log4j.
+ Set to debug or trace if log4j initialization is failing. -->
+<Configuration status="INFO">
+    <Appenders>
+        <Console name="stdout" target="SYSTEM_OUT">
+            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %p %c: %m%n"/>
+            <Filters>
+                <RegexFilter regex=".*Thrift error occurred during processing of message.*" onMatch="DENY" onMismatch="NEUTRAL"/>
+            </Filters>
+        </Console>
+    </Appenders>
+    <Loggers>
+        <Root level="INFO">
+            <AppenderRef ref="stdout"/>
+        </Root>
+        <Logger name="org.apache.kyuubi.ctl.ServiceControlCli" level="error" additivity="false">
+            <AppenderRef ref="stdout"/>
+        </Logger>
+        <!--
+        <Logger name="org.apache.kyuubi.server.mysql.codec" level="trace" additivity="false">
+            <AppenderRef ref="stdout"/>
+        </Logger>
+        -->
+        <Logger name="org.apache.hive.beeline.KyuubiBeeLine" level="error" additivity="false">
+            <AppenderRef ref="stdout"/>
+        </Logger>
+    </Loggers>
+</Configuration>
+```
+
+## Other Configurations
+
+### Hadoop Configurations
+
+Specify `HADOOP_CONF_DIR` pointing to the directory that contains the Hadoop configuration files, or treat them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to the [Apache Hadoop](http://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
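+
+For example, a minimal sketch of pointing Kyuubi at an existing Hadoop client configuration (the path below is only a placeholder for your actual Hadoop configuration directory):
+
+```bash
+# In $KYUUBI_HOME/conf/kyuubi-env.sh
+export HADOOP_CONF_DIR=/path/to/hadoop/conf
+```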
+
+### Hive Configurations
+
+These configurations are used by the SQL engine application to talk to the Hive MetaStore and can be configured in a `hive-site.xml`. Place it in the `$SPARK_HOME/conf` directory, or treat them as Spark properties with a `spark.hadoop.` prefix.
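+
+For example, a hedged sketch of the Spark-property form, pointing the engine at a remote Hive MetaStore (the thrift URI below is hypothetical):
+
+```properties
+# In spark-defaults.conf or kyuubi-defaults.conf
+spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083
+```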
+
+## User Defaults
+
+In Kyuubi, we can configure user default settings to meet separate needs. These user defaults override the system defaults, but will in turn be overridden by those from the [JDBC Connection URL](#via-jdbc-connection-url) or [Set Command](#via-set-syntax) if present. They will take effect ONLY when creating the SQL engine application.
+User default settings are in the form of `___{username}___.{config key}`. There are three continuous underscores (`_`) on both sides of the `username` and a dot (`.`) that separates the prefix from the config key. For example:
+```bash
+# For system defaults
+spark.master=local
+spark.sql.adaptive.enabled=true
+# For a user named kent
+___kent___.spark.master=yarn
+___kent___.spark.sql.adaptive.enabled=false
+# For a user named bob
+___bob___.spark.master=spark://master:7077
+___bob___.spark.executor.memory=8g
+```
+
+In the above case, if there are no related configurations from the [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and obey the Spark AQE behavior of the Kyuubi system default. On the other hand, users who do not have custom configurations will use the system defaults.
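+
+For example, `kent` could still override his user default at session time from the JDBC Connection URL (a sketch reusing the host and port from the earlier examples):
+
+```
+jdbc:hive2://localhost:10009/default;#spark.sql.adaptive.enabled=true
+```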
diff --git a/content/docs/latest/_sources/deployment/spark/aqe.md.txt b/content/docs/latest/_sources/deployment/spark/aqe.md.txt
new file mode 100644
index 0000000..f85fcbf
--- /dev/null
+++ b/content/docs/latest/_sources/deployment/spark/aqe.md.txt
@@ -0,0 +1,264 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+
+# How To Use Spark Adaptive Query Execution (AQE) in Kyuubi
+
+## The Basics of AQE
+
+Spark Adaptive Query Execution (AQE) is a query re-optimization that occurs during query execution.
+
+In terms of technical architecture, AQE is a framework for dynamically planning and replanning queries based on runtime statistics,
+which supports a variety of optimizations such as:
+
+- Dynamically Switch Join Strategies
+- Dynamically Coalesce Shuffle Partitions
+- Dynamically Handle Skew Joins
+
+In Kyuubi, we strongly recommend that you turn on all capabilities of AQE by default for Kyuubi engines, no matter what platform you run Kyuubi and Spark on.
+
+### Dynamically Switch Join Strategies
+
+Spark supports several join strategies, among which `BroadcastHash Join` is usually the most performant when any join side fits well in memory. And for this reason, Spark plans a `BroadcastHash Join` if the estimated size of a join relation is less than the `spark.sql.autoBroadcastJoinThreshold`.
+
+```properties
+spark.sql.autoBroadcastJoinThreshold=10M
+```
+
+Without AQE, the estimated size of join relations comes from the statistics of the original tables. It can go wrong in most real-world cases, for example, when the join relation is a convergent but composite operation rather than a single table scan. In this case, Spark might not be able to switch the join strategy to `BroadcastHash Join`. With AQE, we can calculate the size of the composite operation accurately at runtime. Spark can then replan the join strategy unmistakably if  [...]
+
+<div align=center>
+
+![](../../imgs/spark/aqe_switch_join.png)
+
+</div>
+
+<p align=right>
+<em>
+<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
+</em>
+</p>
+
+What's more, when `spark.sql.adaptive.localShuffleReader.enabled=true` and after converting `SortMerge Join` to `BroadcastHash Join`, Spark also applies a further optimization to reduce network traffic by converting a regular shuffle to a localized shuffle.
+
+<div align=center>
+
+![](../../imgs/spark/localshufflereader.png)
+
+</div>
+
+As shown in the figure above, the local shuffle reader can read all necessary shuffle files from its local storage, without actually performing the shuffle across the network.
+
+The local shuffle reader optimization consists of avoiding shuffle when the `SortMerge Join` transforms to `BroadcastHash Join` after applying the AQE rules.
+
+### Dynamically Coalesce Shuffle Partitions
+
+Without this feature, Spark itself can sometimes be a producer of small files, especially when used in a pure SQL way as Kyuubi does, for example,
+
+1. When `spark.sql.shuffle.partitions` is set too large compared to the total output size, a shuffle stage produces very small or empty files.
+2. When Spark performs a series of optimized `BroadcastHash Join` and `Union` together, the final output size for each partition might be reduced by the join conditions. However, the total number of final output files can explode.
+3. Some pipeline jobs apply selective filters to produce temporary data.
+4. etc.
+
+Reading small files leads to very small partitions or tasks. Spark tasks will have worse I/O throughput and tend to suffer more from scheduling overhead and task setup overhead.
+
+<div align=center>
+
+![](../../imgs/spark/blog-adaptive-query-execution-2.png)
+
+</div>
+
+<p align=right>
+<em>
+<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
+</em>
+</p>
+
+Combining small partitions saves resources and improves cluster throughput. Spark provides several ways to handle small file issues, for example, adding an extra shuffle operation on the partition columns with the `distribute by` clause or using `HINT`[5]. In most scenarios, you need to have a good grasp of your data, Spark jobs, and configurations to apply these solutions case by case. Mostly, the daily used config - `spark.sql.shuffle.partitions` is data-dependent and unchangeable with [...]
+
+But with AQE, things become more comfortable for you as Spark will do the partition coalescing automatically.
+
+<div align=center>
+
+![](../../imgs/spark/blog-adaptive-query-execution-3.png)
+
+</div>
+<p align=right>
+<em>
+<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
+</em>
+</p>
+
+It can simplify the tuning of shuffle partition numbers when running Spark SQL queries. You do not need to set a proper shuffle partition number to fit your dataset.
+
+To enable this feature, we need to set the below two configs to true.
+
+```properties
+spark.sql.adaptive.enabled=true
+spark.sql.adaptive.coalescePartitions.enabled=true
+```
+
+#### Other Tips for Best Practices
+
+To further tune our Spark jobs with this feature, we also need to be aware of these configs.
+
+```properties
+spark.sql.adaptive.advisoryPartitionSizeInBytes=128m
+spark.sql.adaptive.coalescePartitions.minPartitionNum=1
+spark.sql.adaptive.coalescePartitions.initialPartitionNum=200
+```
+
+##### How to set `spark.sql.adaptive.advisoryPartitionSizeInBytes`?
+
+It stands for the advisory size in bytes of the shuffle partition during adaptive query execution, which takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partitions. The default value of `spark.sql.adaptive.advisoryPartitionSizeInBytes` is 64M. Typically, if we are reading and writing data with HDFS, matching it with the block size of HDFS should be the best choice, i.e. 128MB or 256MB.
+
+Consequently, all blocks or partitions in Spark and files in HDFS are chopped up into 128MB/256MB chunks. Now all tasks for scans, sinks, and intermediate shuffle maps deal with mostly even-sized data partitions, which makes it much easier for us to set up executor resources, or even use one size to fit all.
+
+##### How to set `spark.sql.adaptive.coalescePartitions.minPartitionNum`?
+
+It stands for the suggested (not guaranteed) minimum number of shuffle partitions after coalescing. If not set, the default value is the default parallelism of the Spark application, which is defined by `spark.default.parallelism` or else the total count of registered cores. I guess the motivation for this behavior from the Spark community is to maximize the use of the application's resources and concurrency.
+
+But there are always exceptions. Relating these two seemingly unrelated parameters can be somewhat tricky for users. This config is optional by default, which means users may not touch it in most real-world cases. But `spark.default.parallelism` has a long history and is well known. If users unexpectedly set the default parallelism to an illegitimately high value, it could block AQE from coalescing partitions to a fair number. Another scenario that requires special attention is writing  [...]
+
+##### How to set `spark.sql.adaptive.coalescePartitions.initialPartitionNum`?
+
+It stands for the initial number of shuffle partitions before coalescing. By default, it equals `spark.sql.shuffle.partitions` (200). Firstly, it's better to set it explicitly rather than falling back to `spark.sql.shuffle.partitions`. The Spark community suggests setting it to a large number, as Spark will dynamically coalesce shuffle partitions, and I cannot agree more.
+
+### Dynamically Handle Skew Joins
+
+Without AQE, data skewness is very likely to occur for map-reduce computing models in the shuffle phase. Data skewness can cause Spark jobs to have one or more tailing tasks, severely downgrading queries' performance. This feature dynamically handles skew in `SortMerge Join` by splitting (and replicating if needed) skewed tasks into roughly evenly sized tasks. For example, the optimization will split oversized partitions into subpartitions and join them to the other join side's corre [...]
+
+<div align=center>
+
+![](../../imgs/spark/blog-adaptive-query-execution-6.png)
+
+</div>
+<p align=right>
+<em>
+<a href="https://databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html">[2] From Databricks Blog</a>
+</em>
+</p>
+
+To enable this feature, we need to set the below two configs to true.
+
+```properties
+spark.sql.adaptive.enabled=true
+spark.sql.adaptive.skewJoin.enabled=true
+```
+
+#### Other Tips for Best Practices
+
+To further tune our Spark jobs with this feature, we also need to be aware of these configs.
+
+```properties
+spark.sql.adaptive.skewJoin.skewedPartitionFactor=5
+spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes=256M
+spark.sql.adaptive.advisoryPartitionSizeInBytes=64M
+```
+
+##### How to set `spark.sql.adaptive.skewJoin.skewedPartitionFactor` and `skewedPartitionThresholdInBytes`?
+
+Spark uses these two configs and the median (**not average**) partition size to detect whether a partition is skewed or not.
+
+```markdown
+partition size > skewedPartitionFactor * the median partition size && \
+partition size > skewedPartitionThresholdInBytes
+```
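+
+For instance, with the defaults above (factor 5, threshold 256M) and a median partition size of, say, 100M, a partition is flagged as skewed only when it exceeds both 5 x 100M = 500M and 256M, i.e. when it is larger than 500M.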
+
+As Spark splits skewed partitions targeting [spark.sql.adaptive.advisoryPartitionSizeInBytes](aqe.html#how-to-set-spark-sql-adaptive-advisorypartitionsizeinbytes), ideally `skewedPartitionThresholdInBytes` should be larger than `advisoryPartitionSizeInBytes`. In this case, anytime you increase `advisoryPartitionSizeInBytes`, you should also increase `skewedPartitionThresholdInBytes` if you intend to enable the feature.
+
+### Hidden Features
+
+#### DemoteBroadcastHashJoin
+
+Internally, Spark has an optimization rule that detects a join child with a high ratio of empty partitions and adds a no-broadcast-hash-join hint to avoid broadcasting it.
+
+```
+spark.sql.adaptive.nonEmptyPartitionRatioForBroadcastJoin=0.2
+```
+
+By default, if less than 20% of a dataset's partitions contain data, Spark will not broadcast the dataset.
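+
+For instance, under this default ratio of 0.2, if a join child has 100 partitions and only 15 of them contain data, Spark will add the no-broadcast-hash-join hint to that side.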
+
+#### EliminateJoinToEmptyRelation
+
+This optimization rule detects and converts a Join to an empty LocalRelation.
+
+
+#### Disabling the Hidden Features
+
+We can exclude some of these additional AQE rules if a performance regression or bug occurs. For example,
+
+```sql
+SET spark.sql.adaptive.optimizer.excludedRules=org.apache.spark.sql.execution.adaptive.DemoteBroadcastHashJoin
+```
+
+## Best Practices for Applying AQE to Kyuubi
+
+Kyuubi is a long-running service that makes it easier for end-users to use Spark SQL without needing much basic knowledge of Spark. It is essential to have a basic configuration that works for most scenarios on the server side.
... 233187 lines suppressed ...