Posted to commits@kyuubi.apache.org by ya...@apache.org on 2023/01/16 10:34:12 UTC

[kyuubi] branch master updated: [KYUUBI #4161] [DOCS] Refine settings page with correction in grammar and spelling mistakes of config descriptions

This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git


The following commit(s) were added to refs/heads/master by this push:
     new 89c7435dc [KYUUBI #4161] [DOCS] Refine settings page with correction in grammar and spelling mistakes of config descriptions
89c7435dc is described below

commit 89c7435dcab5deadcfec007feb7142fcc56622d6
Author: liangbowen <li...@gf.com.cn>
AuthorDate: Mon Jan 16 18:34:01 2023 +0800

    [KYUUBI #4161] [DOCS] Refine settings page with correction in grammar and spelling mistakes of config descriptions
    
    ### _Why are the changes needed?_
    
    As Kyuubi has graduated to a top-level project, the settings page will be requested more often and should be increasingly reliable and readable, with fewer grammar and spelling mistakes.
    
    This PR is to
    - correct mistakes in grammar, spelling, abbreviation and terminology
    - with no config names or essential meanings changed
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
    
    - [ ] Add screenshots for manual tests if appropriate
    
    - [x] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request
    
    Closes #4161 from bowenliang123/conf-grammar.
    
    Closes #4161
    
    038edfbea [liangbowen] nit
    1ec073a4b [liangbowen] to JSON
    4f5259a32 [liangbowen] to Prometheus
    523855008 [liangbowen] to K8s
    fc7a3a81e [liangbowen] to AUTO-GENERATED
    da64f54fa [liangbowen] update
    d54f9a528 [liangbowen] fix `comma separated` to `comma-separated`
    f1d7cc1f1 [liangbowen] update
    d84208844 [liangbowen] update
    1b75f011c [liangbowen] correction of grammar and spelling mistakes
    
    Authored-by: liangbowen <li...@gf.com.cn>
    Signed-off-by: Kent Yao <ya...@apache.org>
---
 docs/deployment/settings.md                        | 262 +++++++++----------
 docs/extensions/engines/spark/functions.md         |   2 +-
 docs/monitor/metrics.md                            |   6 +-
 .../spark/udf/KyuubiDefinedFunctionSuite.scala     |   2 +-
 .../org/apache/kyuubi/config/KyuubiConf.scala      | 278 +++++++++++----------
 .../main/scala/org/apache/kyuubi/ctl/CtlConf.scala |   6 +-
 .../apache/kyuubi/ha/HighAvailabilityConf.scala    |  34 +--
 .../org/apache/kyuubi/metrics/MetricsConf.scala    |   8 +-
 .../metadata/jdbc/JDBCMetadataStoreConf.scala      |  21 +-
 .../kyuubi/config/AllKyuubiConfiguration.scala     |  10 +-
 .../apache/kyuubi/zookeeper/ZookeeperConf.scala    |  20 +-
 11 files changed, 329 insertions(+), 320 deletions(-)

diff --git a/docs/deployment/settings.md b/docs/deployment/settings.md
index 148158971..4c7a3b66a 100644
--- a/docs/deployment/settings.md
+++ b/docs/deployment/settings.md
@@ -15,7 +15,7 @@
  - limitations under the License.
  -->
 
-<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY [org.apache.kyuubi.config.AllKyuubiConfiguration] -->
+<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO-GENERATED BY [org.apache.kyuubi.config.AllKyuubiConfiguration] -->
 
 
 # Introduction to the Kyuubi Configurations System
@@ -98,7 +98,7 @@ You can configure the environment variables in `$KYUUBI_HOME/conf/kyuubi-env.sh`
 # export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark"
 ```
 
-For the environment variables that only needed to be transferred into engine side, you can set it with a Kyuubi configuration item formatted `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`, the environment variable `SPARK_DRIVER_MEMORY` with value `4g` would be transferred into engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the value of `SPARK_CONF_DIR` in engine side is set to `/apache/confs/spark/conf`.
+For environment variables that only need to be transferred to the engine side, you can set them with a Kyuubi configuration item formatted `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`, the environment variable `SPARK_DRIVER_MEMORY` with value `4g` would be transferred to the engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the value of `SPARK_CONF_DIR` on the engine side is set to `/apache/confs/spark/conf`.
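
For illustration, the mechanism above maps to entries like these in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` (the values `4g` and `/apache/confs/spark/conf` are just the examples from the paragraph):

```
# Forwarded to the engine as SPARK_DRIVER_MEMORY=4g
kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g
# Forwarded to the engine as SPARK_CONF_DIR=/apache/confs/spark/conf
kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf
```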
 
 ## Kyuubi Configurations
 
@@ -136,7 +136,7 @@ You can configure the Kyuubi properties in `$KYUUBI_HOME/conf/kyuubi-defaults.co
 
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
-kyuubi.authentication|NONE|A comma separated list of client authentication types.<ul> <li>NOSASL: raw transport.</li> <li>NONE: no authentication check.</li> <li>KERBEROS: Kerberos/GSSAPI authentication.</li> <li>CUSTOM: User-defined authentication.</li> <li>JDBC: JDBC query authentication.</li> <li>LDAP: Lightweight Directory Access Protocol authentication.</li></ul> Note that: For KERBEROS, it is SASL/GSSAPI mechanism, and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanism. I [...]
+kyuubi.authentication|NONE|A comma-separated list of client authentication types.<ul> <li>NOSASL: raw transport.</li> <li>NONE: no authentication check.</li> <li>KERBEROS: Kerberos/GSSAPI authentication.</li> <li>CUSTOM: User-defined authentication.</li> <li>JDBC: JDBC query authentication.</li> <li>LDAP: Lightweight Directory Access Protocol authentication.</li></ul> Note that: For KERBEROS, it is SASL/GSSAPI mechanism, and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanisms.  [...]
 kyuubi.authentication.custom.class|&lt;undefined&gt;|User-defined authentication implementation of org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider|string|1.3.0
 kyuubi.authentication.jdbc.driver.class|&lt;undefined&gt;|Driver class name for JDBC Authentication Provider.|string|1.6.0
 kyuubi.authentication.jdbc.password|&lt;undefined&gt;|Database password for JDBC Authentication Provider.|string|1.6.0
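
As a minimal sketch of the authentication options above (the choice of mechanisms is illustrative, assuming an LDAP deployment), `kyuubi.authentication` takes one or more of the listed values as a comma-separated list:

```
# Single mechanism
kyuubi.authentication=LDAP
# Or several, e.g. Kerberos plus LDAP
# kyuubi.authentication=KERBEROS,LDAP
```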
@@ -158,8 +158,8 @@ kyuubi.backend.engine.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async
 kyuubi.backend.engine.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in SQL engine applications|duration|1.0.0
 kyuubi.backend.engine.exec.pool.size|100|Number of threads in the operation execution thread pool of SQL engine applications|int|1.0.0
 kyuubi.backend.engine.exec.pool.wait.queue.size|100|Size of the wait queue for the operation execution thread pool in SQL engine applications|int|1.0.0
-kyuubi.backend.server.event.json.log.path|file:///tmp/kyuubi/events|The location of server events go for the builtin JSON logger|string|1.4.0
-kyuubi.backend.server.event.loggers||A comma separated list of server history loggers, where session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.backend.server.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: User-defined event handlers.</li></ul> Note that: Kyuubi supports custom event handlers with the Java SPI. To register a custom event handler, user need to implement a class which is a child of org.apache.kyuubi.events.ha [...]
+kyuubi.backend.server.event.json.log.path|file:///tmp/kyuubi/events|The location of server events go for the built-in JSON logger|string|1.4.0
+kyuubi.backend.server.event.loggers||A comma-separated list of server history loggers, where session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.backend.server.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: User-defined event handlers.</li></ul> Note that: Kyuubi supports custom event handlers with the Java SPI. To register a custom event handler, the user needs to implement a class which is a child of org.apache.kyuubi.even [...]
 kyuubi.backend.server.exec.pool.keepalive.time|PT1M|Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in Kyuubi server|duration|1.0.0
 kyuubi.backend.server.exec.pool.shutdown.timeout|PT10S|Timeout(ms) for the operation execution thread pool to terminate in Kyuubi server|duration|1.0.0
 kyuubi.backend.server.exec.pool.size|100|Number of threads in the operation execution thread pool of Kyuubi server|int|1.0.0
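
A hedged example wiring the two server-event settings above together (the path is the documented default, not a recommendation):

```
# Send server/session/operation events to the built-in JSON logger
kyuubi.backend.server.event.loggers=JSON
# Directory where the JSON event files are written
kyuubi.backend.server.event.json.log.path=file:///tmp/kyuubi/events
```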
@@ -172,7 +172,7 @@ Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 kyuubi.batch.application.check.interval|PT5S|The interval to check batch job application information.|duration|1.6.0
 kyuubi.batch.application.starvation.timeout|PT3M|Threshold above which to warn batch application may be starved.|duration|1.7.0
-kyuubi.batch.conf.ignore.list||A comma separated list of ignored keys for batch conf. If the batch conf contains any of them, the key and the corresponding value will be removed silently during batch job submission. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering. You can also pre-define some config for batch job submission with prefix: kyuubi.batchConf.[batchType]. For example, you can pre-define `spark.master [...]
+kyuubi.batch.conf.ignore.list||A comma-separated list of ignored keys for batch conf. If the batch conf contains any of them, the key and the corresponding value will be removed silently during batch job submission. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering. You can also pre-define some config for batch job submission with the prefix: kyuubi.batchConf.[batchType]. For example, you can pre-define `spark.ma [...]
 kyuubi.batch.session.idle.timeout|PT6H|Batch session idle timeout, it will be closed when it's not accessed for this duration|duration|1.6.2
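
A sketch of the two batch-conf mechanisms described above; the specific keys (`spark.driver.extraJavaOptions`, `spark.master`) are illustrative assumptions:

```
# Server-side protection: silently drop these keys from submitted batch confs
kyuubi.batch.conf.ignore.list=spark.driver.extraJavaOptions
# Pre-defined conf for Spark batch jobs via the kyuubi.batchConf.[batchType] prefix
kyuubi.batchConf.spark.spark.master=yarn
```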
 
 
@@ -184,10 +184,10 @@ kyuubi.credentials.check.interval|PT5M|The interval to check the expiration of c
 kyuubi.credentials.hadoopfs.enabled|true|Whether to renew Hadoop filesystem delegation tokens|boolean|1.4.0
 kyuubi.credentials.hadoopfs.uris||Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here.|seq|1.4.0
 kyuubi.credentials.hive.enabled|true|Whether to renew Hive metastore delegation token|boolean|1.4.0
-kyuubi.credentials.idle.timeout|PT6H|inactive users' credentials will be expired after a configured timeout|duration|1.6.0
+kyuubi.credentials.idle.timeout|PT6H|The inactive users' credentials will be expired after a configured timeout|duration|1.6.0
 kyuubi.credentials.renewal.interval|PT1H|How often Kyuubi renews one user's delegation tokens|duration|1.4.0
 kyuubi.credentials.renewal.retry.wait|PT1M|How long to wait before retrying to fetch new credentials after a failure.|duration|1.4.0
-kyuubi.credentials.update.wait.timeout|PT1M|How long to wait until credentials are ready.|duration|1.5.0
+kyuubi.credentials.update.wait.timeout|PT1M|How long to wait until the credentials are ready|duration|1.5.0
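
For example, the credential-lifecycle knobs above could be tuned like this (the durations shown are the documented defaults):

```
# Renew each user's delegation tokens hourly
kyuubi.credentials.renewal.interval=PT1H
# Expire credentials of users inactive for six hours
kyuubi.credentials.idle.timeout=PT6H
# Wait up to one minute for credentials to become ready
kyuubi.credentials.update.wait.timeout=PT1M
```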
 
 
 ### Ctl
@@ -197,11 +197,11 @@ Key | Default | Meaning | Type | Since
 kyuubi.ctl.batch.log.on.failure.timeout|PT10S|The timeout for fetching remaining batch logs if the batch failed.|duration|1.6.1
 kyuubi.ctl.batch.log.query.interval|PT3S|The interval for fetching batch logs.|duration|1.6.0
 kyuubi.ctl.rest.auth.schema|basic|The authentication schema. Valid values are: basic, spnego.|string|1.6.0
-kyuubi.ctl.rest.base.url|&lt;undefined&gt;|The REST API base URL, which contains the scheme (http:// or https://), host name, port number|string|1.6.0
-kyuubi.ctl.rest.connect.timeout|PT30S|The timeout[ms] for establishing the connection with the kyuubi server.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
+kyuubi.ctl.rest.base.url|&lt;undefined&gt;|The REST API base URL, which contains the scheme (http:// or https://), hostname, port number|string|1.6.0
+kyuubi.ctl.rest.connect.timeout|PT30S|The timeout[ms] for establishing the connection with the kyuubi server. A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
 kyuubi.ctl.rest.request.attempt.wait|PT3S|How long to wait between attempts of ctl rest request.|duration|1.6.0
 kyuubi.ctl.rest.request.max.attempts|3|The max attempts number for ctl rest request.|int|1.6.0
-kyuubi.ctl.rest.socket.timeout|PT2M|The timeout[ms] for waiting for data packets after connection is established.A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
+kyuubi.ctl.rest.socket.timeout|PT2M|The timeout[ms] for waiting for data packets after connection is established. A timeout value of zero is interpreted as an infinite timeout.|duration|1.6.0
 kyuubi.ctl.rest.spnego.host|&lt;undefined&gt;|When auth schema is spnego, need to config spnego host.|string|1.6.0
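
A minimal ctl REST client setup, assuming a hypothetical server host (`kyuubi-server.example.com`); the port matches the documented REST frontend default:

```
# Base URL must include the scheme, hostname, and port
kyuubi.ctl.rest.base.url=http://kyuubi-server.example.com:10099
# Zero would mean an infinite timeout for both settings below
kyuubi.ctl.rest.connect.timeout=PT30S
kyuubi.ctl.rest.socket.timeout=PT2M
```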
 
 
@@ -219,57 +219,57 @@ kyuubi.delegation.token.renew.interval|PT168H|unused yet|duration|1.0.0
 
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
-kyuubi.engine.connection.url.use.hostname|true|(deprecated) When true, engine register with hostname to zookeeper. When spark run on k8s with cluster mode, set to false to ensure that server can connect to engine|boolean|1.3.0
-kyuubi.engine.deregister.exception.classes||A comma separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself.|seq|1.2.0
-kyuubi.engine.deregister.exception.messages||A comma separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself.|seq|1.2.0
+kyuubi.engine.connection.url.use.hostname|true|(deprecated) When true, the engine registers with its hostname to ZooKeeper. When Spark runs on K8s in cluster mode, set to false to ensure that the server can connect to the engine|boolean|1.3.0
+kyuubi.engine.deregister.exception.classes||A comma-separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself.|seq|1.2.0
+kyuubi.engine.deregister.exception.messages||A comma-separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself.|seq|1.2.0
 kyuubi.engine.deregister.exception.ttl|PT30M|Time to live(TTL) for exceptions pattern specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits the kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait for self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures.|duration|1.2.0
 kyuubi.engine.deregister.job.max.failures|4|Number of failures of job before deregistering the engine.|int|1.2.0
-kyuubi.engine.event.json.log.path|file:///tmp/kyuubi/events|The location of all the engine events go for the builtin JSON logger.<ul><li>Local Path: start with 'file://'</li><li>HDFS Path: start with 'hdfs://'</li></ul>|string|1.3.0
-kyuubi.engine.event.loggers|SPARK|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: User-defined event handlers.</li></ul> Note that: Kyuubi supports custom event handlers with the Java SPI. To register a custom event handler, user need to [...]
-kyuubi.engine.flink.extra.classpath|&lt;undefined&gt;|The extra classpath for the flink sql engine, for configuring location of hadoop client jars, etc|string|1.6.0
-kyuubi.engine.flink.java.options|&lt;undefined&gt;|The extra java options for the flink sql engine|string|1.6.0
-kyuubi.engine.flink.memory|1g|The heap memory for the flink sql engine|string|1.6.0
-kyuubi.engine.hive.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
-kyuubi.engine.hive.extra.classpath|&lt;undefined&gt;|The extra classpath for the hive query engine, for configuring location of hadoop client jars, etc|string|1.6.0
-kyuubi.engine.hive.java.options|&lt;undefined&gt;|The extra java options for the hive query engine|string|1.6.0
-kyuubi.engine.hive.memory|1g|The heap memory for the hive query engine|string|1.6.0
+kyuubi.engine.event.json.log.path|file:///tmp/kyuubi/events|The location where all the engine events go for the built-in JSON logger.<ul><li>Local Path: start with 'file://'</li><li>HDFS Path: start with 'hdfs://'</li></ul>|string|1.3.0
+kyuubi.engine.event.loggers|SPARK|A comma-separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the Spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: User-defined event handlers.</li></ul> Note that: Kyuubi supports custom event handlers with the Java SPI. To register a custom event handler, the user nee [...]
+kyuubi.engine.flink.extra.classpath|&lt;undefined&gt;|The extra classpath for the Flink SQL engine, for configuring the location of hadoop client jars, etc|string|1.6.0
+kyuubi.engine.flink.java.options|&lt;undefined&gt;|The extra Java options for the Flink SQL engine|string|1.6.0
+kyuubi.engine.flink.memory|1g|The heap memory for the Flink SQL engine|string|1.6.0
+kyuubi.engine.hive.event.loggers|JSON|A comma-separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.hive.extra.classpath|&lt;undefined&gt;|The extra classpath for the Hive query engine, for configuring the location of the hadoop client jars, etc.|string|1.6.0
+kyuubi.engine.hive.java.options|&lt;undefined&gt;|The extra Java options for the Hive query engine|string|1.6.0
+kyuubi.engine.hive.memory|1g|The heap memory for the Hive query engine|string|1.6.0
 kyuubi.engine.initialize.sql|SHOW DATABASES|SemiColon-separated list of SQL statements to be initialized in the newly created engine before queries. i.e. use `SHOW DATABASES` to eagerly activate HiveClient. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.2.0
 kyuubi.engine.jdbc.connection.password|&lt;undefined&gt;|The password is used for connecting to server|string|1.6.0
 kyuubi.engine.jdbc.connection.properties||The additional properties are used for connecting to server|seq|1.6.0
-kyuubi.engine.jdbc.connection.provider|&lt;undefined&gt;|The connection provider is used for getting a connection from server|string|1.6.0
+kyuubi.engine.jdbc.connection.provider|&lt;undefined&gt;|The connection provider is used for getting a connection from the server|string|1.6.0
 kyuubi.engine.jdbc.connection.url|&lt;undefined&gt;|The server url that engine will connect to|string|1.6.0
 kyuubi.engine.jdbc.connection.user|&lt;undefined&gt;|The user is used for connecting to server|string|1.6.0
-kyuubi.engine.jdbc.driver.class|&lt;undefined&gt;|The driver class for jdbc engine connection|string|1.6.0
-kyuubi.engine.jdbc.extra.classpath|&lt;undefined&gt;|The extra classpath for the jdbc query engine, for configuring location of jdbc driver, etc|string|1.6.0
-kyuubi.engine.jdbc.java.options|&lt;undefined&gt;|The extra java options for the jdbc query engine|string|1.6.0
-kyuubi.engine.jdbc.memory|1g|The heap memory for the jdbc query engine|string|1.6.0
-kyuubi.engine.jdbc.type|&lt;undefined&gt;|The short name of jdbc type|string|1.6.0
+kyuubi.engine.jdbc.driver.class|&lt;undefined&gt;|The driver class for JDBC engine connection|string|1.6.0
+kyuubi.engine.jdbc.extra.classpath|&lt;undefined&gt;|The extra classpath for the JDBC query engine, for configuring the location of the JDBC driver, etc.|string|1.6.0
+kyuubi.engine.jdbc.java.options|&lt;undefined&gt;|The extra Java options for the JDBC query engine|string|1.6.0
+kyuubi.engine.jdbc.memory|1g|The heap memory for the JDBC query engine|string|1.6.0
+kyuubi.engine.jdbc.type|&lt;undefined&gt;|The short name of JDBC type|string|1.6.0
 kyuubi.engine.operation.convert.catalog.database.enabled|true|When set to true, The engine converts the JDBC methods of set/get Catalog and set/get Schema to the implementation of different engines|boolean|1.6.0
 kyuubi.engine.operation.log.dir.root|engine_operation_logs|Root directory for query operation log at engine-side.|string|1.4.0
-kyuubi.engine.pool.name|engine-pool|The name of engine pool.|string|1.5.0
+kyuubi.engine.pool.name|engine-pool|The name of the engine pool.|string|1.5.0
 kyuubi.engine.pool.selectPolicy|RANDOM|The select policy of an engine from the corresponding engine pool engine for a session. <ul><li>RANDOM - Randomly use the engine in the pool</li><li>POLLING - Polling use the engine in the pool</li></ul>|string|1.7.0
-kyuubi.engine.pool.size|-1|The size of engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold).|int|1.4.0
-kyuubi.engine.pool.size.threshold|9|This parameter is introduced as a server-side parameter, and controls the upper limit of the engine pool.|int|1.4.0
+kyuubi.engine.pool.size|-1|The size of the engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold).|int|1.4.0
+kyuubi.engine.pool.size.threshold|9|This parameter is introduced as a server-side parameter controlling the upper limit of the engine pool.|int|1.4.0
 kyuubi.engine.session.initialize.sql||SemiColon-separated list of SQL statements to be initialized in the newly created engine session before queries. This configuration can not be used in JDBC url due to the limitation of Beeline/JDBC driver.|seq|1.3.0
-kyuubi.engine.share.level|USER|Engines will be shared in different levels, available configs are: <ul> <li>CONNECTION: engine will not be shared but only used by the current client connection</li> <li>USER: engine will be shared by all sessions created by a unique username, see also kyuubi.engine.share.level.subdomain</li> <li>GROUP: engine will be shared by all sessions created by all users belong to the same primary group name. The engine will be launched by the group name as the effec [...]
+kyuubi.engine.share.level|USER|Engines will be shared in different levels, available configs are: <ul> <li>CONNECTION: engine will not be shared but only used by the current client connection</li> <li>USER: engine will be shared by all sessions created by a unique username, see also kyuubi.engine.share.level.subdomain</li> <li>GROUP: the engine will be shared by all sessions created by all users belonging to the same primary group name. The engine will be launched by the group name as the e [...]
 kyuubi.engine.share.level.sub.domain|&lt;undefined&gt;|(deprecated) - Using kyuubi.engine.share.level.subdomain instead|string|1.2.0
-kyuubi.engine.share.level.subdomain|&lt;undefined&gt;|Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string values that must be a valid zookeeper sub path. For example, for `USER` share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the `USER` share level. When disable engine pool, use 'default' if absent.|string|1.4.0
+kyuubi.engine.share.level.subdomain|&lt;undefined&gt;|Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string value that must be a valid zookeeper subpath. For example, for the `USER` share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the `USER` share level. When the engine pool is disabled, use 'default' if absent.|string|1.4.0
 kyuubi.engine.single.spark.session|false|When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.|boolean|1.3.0
-kyuubi.engine.spark.event.loggers|SPARK|A comma separated list of engine loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
-kyuubi.engine.spark.python.env.archive|&lt;undefined&gt;|Portable python env archive used for Spark engine python language mode.|string|1.7.0
-kyuubi.engine.spark.python.env.archive.exec.path|bin/python|The python exec path under the python env archive.|string|1.7.0
-kyuubi.engine.spark.python.home.archive|&lt;undefined&gt;|Spark archive containing $SPARK_HOME/python directory, which is used to init session python worker for python language mode.|string|1.7.0
-kyuubi.engine.trino.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
-kyuubi.engine.trino.extra.classpath|&lt;undefined&gt;|The extra classpath for the trino query engine, for configuring other libs which may need by the trino engine |string|1.6.0
-kyuubi.engine.trino.java.options|&lt;undefined&gt;|The extra java options for the trino query engine|string|1.6.0
-kyuubi.engine.trino.memory|1g|The heap memory for the trino query engine|string|1.6.0
-kyuubi.engine.type|SPARK_SQL|Specify the detailed engine that supported by the Kyuubi. The engine type bindings to SESSION scope. This configuration is experimental. Currently, available configs are: <ul> <li>SPARK_SQL: specify this engine type will launch a Spark engine which can provide all the capacity of the Apache Spark. Note, it's a default engine type.</li> <li>FLINK_SQL: specify this engine type will launch a Flink engine which can provide all the capacity of the Apache Flink.</l [...]
+kyuubi.engine.spark.event.loggers|SPARK|A comma-separated list of engine loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the Spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.spark.python.env.archive|&lt;undefined&gt;|Portable Python env archive used for Spark engine Python language mode.|string|1.7.0
+kyuubi.engine.spark.python.env.archive.exec.path|bin/python|The Python exec path under the Python env archive.|string|1.7.0
+kyuubi.engine.spark.python.home.archive|&lt;undefined&gt;|Spark archive containing $SPARK_HOME/python directory, which is used to init session Python worker for Python language mode.|string|1.7.0
+kyuubi.engine.trino.event.loggers|JSON|A comma-separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.trino.extra.classpath|&lt;undefined&gt;|The extra classpath for the Trino query engine, for configuring other libs which may be needed by the Trino engine|string|1.6.0
+kyuubi.engine.trino.java.options|&lt;undefined&gt;|The extra Java options for the Trino query engine|string|1.6.0
+kyuubi.engine.trino.memory|1g|The heap memory for the Trino query engine|string|1.6.0
+kyuubi.engine.type|SPARK_SQL|Specify the detailed engine supported by Kyuubi. The engine type binds to the SESSION scope. This configuration is experimental. Currently, available configs are: <ul> <li>SPARK_SQL: specify this engine type will launch a Spark engine which can provide all the capacity of the Apache Spark. Note, it's a default engine type.</li> <li>FLINK_SQL: specify this engine type will launch a Flink engine which can provide all the capacity of the Apache Flink.</li> <li>TR [...]
 kyuubi.engine.ui.retainedSessions|200|The number of SQL client sessions kept in the Kyuubi Query Engine web UI.|int|1.4.0
 kyuubi.engine.ui.retainedStatements|200|The number of statements kept in the Kyuubi Query Engine web UI.|int|1.4.0
 kyuubi.engine.ui.stop.enabled|true|When true, allows Kyuubi engine to be killed from the Spark Web UI.|boolean|1.3.0
-kyuubi.engine.user.isolated.spark.session|true|When set to false, if the engine is running in a group or server share level, all the JDBC/ODBC connections will be isolated against the user. Including: the temporary views, function registries, SQL configuration and the current database. Note that, it does not affect if the share level is connection or user.|boolean|1.6.0
-kyuubi.engine.user.isolated.spark.session.idle.interval|PT1M|The interval to check if the user isolated spark session is timeout.|duration|1.6.0
-kyuubi.engine.user.isolated.spark.session.idle.timeout|PT6H|If kyuubi.engine.user.isolated.spark.session is false, we will release the spark session if its corresponding user is inactive after this configured timeout.|duration|1.6.0
+kyuubi.engine.user.isolated.spark.session|true|When set to false, if the engine is running in a group or server share level, all the JDBC/ODBC connections will be isolated against the user, including the temporary views, function registries, SQL configuration, and the current database. Note that it has no effect if the share level is connection or user.|boolean|1.6.0
+kyuubi.engine.user.isolated.spark.session.idle.interval|PT1M|The interval to check whether the user-isolated Spark session has timed out.|duration|1.6.0
+kyuubi.engine.user.isolated.spark.session.idle.timeout|PT6H|If kyuubi.engine.user.isolated.spark.session is false, we will release the Spark session if its corresponding user is inactive after this configured timeout.|duration|1.6.0
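
As a sketch of how the sharing and pooling options above combine (the subdomain name `sales` is an arbitrary assumption):

```
# One engine per user, further partitioned by a named subdomain
kyuubi.engine.share.level=USER
kyuubi.engine.share.level.subdomain=sales
# Enable a pool of up to 3 engines, selected at random per session
kyuubi.engine.pool.size=3
kyuubi.engine.pool.selectPolicy=RANDOM
```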
 
 
 ### Event
@@ -287,38 +287,38 @@ Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 kyuubi.frontend.backoff.slot.length|PT0.1S|(deprecated) Time to back off during login to the thrift frontend service.|duration|1.0.0
 kyuubi.frontend.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the frontend services.|string|1.0.0
-kyuubi.frontend.bind.port|10009|(deprecated) Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.0.0
-kyuubi.frontend.connection.url.use.hostname|true|When true, frontend services prefer hostname, otherwise, ip address. Note that, the default value is set to `false` when engine running on Kubernetes to prevent potential network issue.|boolean|1.5.0
+kyuubi.frontend.bind.port|10009|(deprecated) Port of the machine on which to run the thrift frontend service via the binary protocol.|int|1.0.0
+kyuubi.frontend.connection.url.use.hostname|true|When true, frontend services prefer the hostname; otherwise, the IP address. Note that the default value is set to `false` when the engine is running on Kubernetes to prevent potential network issues.|boolean|1.5.0
 kyuubi.frontend.login.timeout|PT20S|(deprecated) Timeout for Thrift clients during login to the thrift frontend service.|duration|1.0.0
 kyuubi.frontend.max.message.size|104857600|(deprecated) Maximum message size in bytes a Kyuubi server will accept.|int|1.0.0
-kyuubi.frontend.max.worker.threads|999|(deprecated) Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
-kyuubi.frontend.min.worker.threads|9|(deprecated) Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.0.0
+kyuubi.frontend.max.worker.threads|999|(deprecated) Maximum number of threads in the frontend worker thread pool for the thrift frontend service|int|1.0.0
+kyuubi.frontend.min.worker.threads|9|(deprecated) Minimum number of threads in the frontend worker thread pool for the thrift frontend service|int|1.0.0
 kyuubi.frontend.mysql.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the MySQL frontend service.|string|1.4.0
 kyuubi.frontend.mysql.bind.port|3309|Port of the machine on which to run the MySQL frontend service.|int|1.4.0
 kyuubi.frontend.mysql.max.worker.threads|999|Maximum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
 kyuubi.frontend.mysql.min.worker.threads|9|Minimum number of threads in the command execution thread pool for the MySQL frontend service|int|1.4.0
 kyuubi.frontend.mysql.netty.worker.threads|&lt;undefined&gt;|Number of thread in the netty worker event loop of MySQL frontend service. Use min(cpu_cores, 8) in default.|int|1.4.0
 kyuubi.frontend.mysql.worker.keepalive.time|PT1M|Time(ms) that an idle async thread of the command execution thread pool will wait for a new task to arrive before terminating in MySQL frontend service|duration|1.4.0
-kyuubi.frontend.protocols|THRIFT_BINARY|A comma separated list for all frontend protocols <ul> <li>THRIFT_BINARY - HiveServer2 compatible thrift binary protocol.</li> <li>THRIFT_HTTP - HiveServer2 compatible thrift http protocol.</li> <li>REST - Kyuubi defined REST API(experimental).</li>  <li>MYSQL - MySQL compatible text protocol(experimental).</li>  <li>TRINO - Trino compatible http protocol(experimental).</li> </ul>|seq|1.4.0
-kyuubi.frontend.proxy.http.client.ip.header|X-Real-IP|The http header to record the real client ip address. If your server is behind a load balancer or other proxy, the server will see this load balancer or proxy IP address as the client IP address, to get around this common issue, most load balancers or proxies offer the ability to record the real remote IP address in an HTTP header that will be added to the request for other devices to use. Note that, because the header value can be sp [...]
+kyuubi.frontend.protocols|THRIFT_BINARY|A comma-separated list for all frontend protocols <ul> <li>THRIFT_BINARY - HiveServer2 compatible thrift binary protocol.</li> <li>THRIFT_HTTP - HiveServer2 compatible thrift http protocol.</li> <li>REST - Kyuubi defined REST API(experimental).</li>  <li>MYSQL - MySQL compatible text protocol(experimental).</li>  <li>TRINO - Trino compatible http protocol(experimental).</li> </ul>|seq|1.4.0
+kyuubi.frontend.proxy.http.client.ip.header|X-Real-IP|The HTTP header to record the real client IP address. If your server is behind a load balancer or other proxy, the server will see this load balancer or proxy IP address as the client IP address. To get around this common issue, most load balancers or proxies offer the ability to record the real remote IP address in an HTTP header that will be added to the request for other devices to use. Note that, because the header value can be sp [...]
 kyuubi.frontend.rest.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the REST frontend service.|string|1.4.0
 kyuubi.frontend.rest.bind.port|10099|Port of the machine on which to run the REST frontend service.|int|1.4.0
-kyuubi.frontend.rest.max.worker.threads|999|Maximum number of threads in the of frontend worker thread pool for the rest frontend service|int|1.6.2
+kyuubi.frontend.rest.max.worker.threads|999|Maximum number of threads in the frontend worker thread pool for the REST frontend service|int|1.6.2
 kyuubi.frontend.ssl.keystore.algorithm|&lt;undefined&gt;|SSL certificate keystore algorithm.|string|1.7.0
 kyuubi.frontend.ssl.keystore.password|&lt;undefined&gt;|SSL certificate keystore password.|string|1.7.0
 kyuubi.frontend.ssl.keystore.path|&lt;undefined&gt;|SSL certificate keystore location.|string|1.7.0
 kyuubi.frontend.ssl.keystore.type|&lt;undefined&gt;|SSL certificate keystore type.|string|1.7.0
 kyuubi.frontend.thrift.backoff.slot.length|PT0.1S|Time to back off during login to the thrift frontend service.|duration|1.4.0
-kyuubi.frontend.thrift.binary.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via binary protocol.|string|1.4.0
-kyuubi.frontend.thrift.binary.bind.port|10009|Port of the machine on which to run the thrift frontend service via binary protocol.|int|1.4.0
+kyuubi.frontend.thrift.binary.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via the binary protocol.|string|1.4.0
+kyuubi.frontend.thrift.binary.bind.port|10009|Port of the machine on which to run the thrift frontend service via the binary protocol.|int|1.4.0
 kyuubi.frontend.thrift.binary.ssl.disallowed.protocols|SSLv2,SSLv3|SSL versions to disallow for Kyuubi thrift binary frontend.|seq|1.7.0
 kyuubi.frontend.thrift.binary.ssl.enabled|false|Set this to true for using SSL encryption in thrift binary frontend server.|boolean|1.7.0
-kyuubi.frontend.thrift.binary.ssl.include.ciphersuites||A comma separated list of include SSL cipher suite names for thrift binary frontend.|seq|1.7.0
+kyuubi.frontend.thrift.binary.ssl.include.ciphersuites||A comma-separated list of included SSL cipher suite names for thrift binary frontend.|seq|1.7.0
 kyuubi.frontend.thrift.http.allow.user.substitution|true|Allow alternate user to be specified as part of open connection request when using HTTP transport mode.|boolean|1.6.0
 kyuubi.frontend.thrift.http.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the thrift frontend service via http protocol.|string|1.6.0
 kyuubi.frontend.thrift.http.bind.port|10010|Port of the machine on which to run the thrift frontend service via http protocol.|int|1.6.0
 kyuubi.frontend.thrift.http.compression.enabled|true|Enable thrift http compression via Jetty compression support|boolean|1.6.0
-kyuubi.frontend.thrift.http.cookie.auth.enabled|true|When true, Kyuubi in HTTP transport mode, will use cookie based authentication mechanism|boolean|1.6.0
+kyuubi.frontend.thrift.http.cookie.auth.enabled|true|When true, Kyuubi in HTTP transport mode will use a cookie-based authentication mechanism|boolean|1.6.0
 kyuubi.frontend.thrift.http.cookie.domain|&lt;undefined&gt;|Domain for the Kyuubi generated cookies|string|1.6.0
 kyuubi.frontend.thrift.http.cookie.is.httponly|true|HttpOnly attribute of the Kyuubi generated cookie.|boolean|1.6.0
 kyuubi.frontend.thrift.http.cookie.max.age|86400|Maximum age in seconds for server side cookie used by Kyuubi in HTTP mode.|int|1.6.0
@@ -327,20 +327,20 @@ kyuubi.frontend.thrift.http.max.idle.time|PT30M|Maximum idle time for a connecti
 kyuubi.frontend.thrift.http.path|cliservice|Path component of URL endpoint when in HTTP mode.|string|1.6.0
 kyuubi.frontend.thrift.http.request.header.size|6144|Request header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
 kyuubi.frontend.thrift.http.response.header.size|6144|Response header size in bytes, when using HTTP transport mode. Jetty defaults used.|int|1.6.0
-kyuubi.frontend.thrift.http.ssl.exclude.ciphersuites||A comma separated list of exclude SSL cipher suite names for thrift http frontend.|seq|1.7.0
+kyuubi.frontend.thrift.http.ssl.exclude.ciphersuites||A comma-separated list of excluded SSL cipher suite names for thrift http frontend.|seq|1.7.0
 kyuubi.frontend.thrift.http.ssl.keystore.password|&lt;undefined&gt;|SSL certificate keystore password.|string|1.6.0
 kyuubi.frontend.thrift.http.ssl.keystore.path|&lt;undefined&gt;|SSL certificate keystore location.|string|1.6.0
 kyuubi.frontend.thrift.http.ssl.protocol.blacklist|SSLv2,SSLv3|SSL Versions to disable when using HTTP transport mode.|seq|1.6.0
 kyuubi.frontend.thrift.http.use.SSL|false|Set this to true for using SSL encryption in http mode.|boolean|1.6.0
-kyuubi.frontend.thrift.http.xsrf.filter.enabled|false|If enabled, Kyuubi will block any requests made to it over http if an X-XSRF-HEADER header is not present|boolean|1.6.0
+kyuubi.frontend.thrift.http.xsrf.filter.enabled|false|If enabled, Kyuubi will block any requests made to it over HTTP if an X-XSRF-HEADER header is not present|boolean|1.6.0
 kyuubi.frontend.thrift.login.timeout|PT20S|Timeout for Thrift clients during login to the thrift frontend service.|duration|1.4.0
 kyuubi.frontend.thrift.max.message.size|104857600|Maximum message size in bytes a Kyuubi server will accept.|int|1.4.0
-kyuubi.frontend.thrift.max.worker.threads|999|Maximum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
-kyuubi.frontend.thrift.min.worker.threads|9|Minimum number of threads in the of frontend worker thread pool for the thrift frontend service|int|1.4.0
+kyuubi.frontend.thrift.max.worker.threads|999|Maximum number of threads in the frontend worker thread pool for the thrift frontend service|int|1.4.0
+kyuubi.frontend.thrift.min.worker.threads|9|Minimum number of threads in the frontend worker thread pool for the thrift frontend service|int|1.4.0
 kyuubi.frontend.thrift.worker.keepalive.time|PT1M|Keep-alive time (in milliseconds) for an idle worker thread|duration|1.4.0
 kyuubi.frontend.trino.bind.host|&lt;undefined&gt;|Hostname or IP of the machine on which to run the TRINO frontend service.|string|1.7.0
 kyuubi.frontend.trino.bind.port|10999|Port of the machine on which to run the TRINO frontend service.|int|1.7.0
-kyuubi.frontend.trino.max.worker.threads|999|Maximum number of threads in the of frontend worker thread pool for the trino frontend service|int|1.7.0
+kyuubi.frontend.trino.max.worker.threads|999|Maximum number of threads in the frontend worker thread pool for the Trino frontend service|int|1.7.0
 kyuubi.frontend.worker.keepalive.time|PT1M|(deprecated) Keep-alive time (in milliseconds) for an idle worker thread|duration|1.0.0
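
For instance, exposing several of the frontend protocols listed above side by side (the ports are the documented defaults):

```
kyuubi.frontend.protocols=THRIFT_BINARY,THRIFT_HTTP,REST
kyuubi.frontend.thrift.binary.bind.port=10009
kyuubi.frontend.thrift.http.bind.port=10010
kyuubi.frontend.rest.bind.port=10099
```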
 
 
@@ -350,27 +350,27 @@ Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 kyuubi.ha.addresses||The connection string for the discovery ensemble|string|1.6.0
 kyuubi.ha.client.class|org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient|Class name for service discovery client.<ul> <li>Zookeeper: org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient</li> <li>Etcd: org.apache.kyuubi.ha.client.etcd.EtcdDiscoveryClient</li></ul>|string|1.6.0
-kyuubi.ha.etcd.lease.timeout|PT10S|Timeout for etcd keep alive lease. The kyuubi server will known unexpected loss of engine after up to this seconds.|duration|1.6.0
+kyuubi.ha.etcd.lease.timeout|PT10S|Timeout for the etcd keep-alive lease. The Kyuubi server will know about an unexpected loss of the engine after at most this duration.|duration|1.6.0
 kyuubi.ha.etcd.ssl.ca.path|&lt;undefined&gt;|Where the etcd CA certificate file is stored.|string|1.6.0
 kyuubi.ha.etcd.ssl.client.certificate.path|&lt;undefined&gt;|Where the etcd SSL certificate file is stored.|string|1.6.0
 kyuubi.ha.etcd.ssl.client.key.path|&lt;undefined&gt;|Where the etcd SSL key file is stored.|string|1.6.0
-kyuubi.ha.etcd.ssl.enabled|false|When set to true, will build a ssl secured etcd client.|boolean|1.6.0
+kyuubi.ha.etcd.ssl.enabled|false|When set to true, will build an SSL secured etcd client.|boolean|1.6.0
 kyuubi.ha.namespace|kyuubi|The root directory for the service to deploy its instance uri|string|1.6.0
-kyuubi.ha.zookeeper.acl.enabled|false|Set to true if the zookeeper ensemble is kerberized|boolean|1.0.0
-kyuubi.ha.zookeeper.auth.digest|&lt;undefined&gt;|The digest auth string is used for zookeeper authentication, like: username:password.|string|1.3.2
-kyuubi.ha.zookeeper.auth.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab is used for zookeeper authentication.|string|1.3.2
-kyuubi.ha.zookeeper.auth.principal|&lt;undefined&gt;|Name of the Kerberos principal is used for zookeeper authentication.|string|1.3.2
-kyuubi.ha.zookeeper.auth.type|NONE|The type of zookeeper authentication, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
-kyuubi.ha.zookeeper.connection.base.retry.wait|1000|Initial amount of time to wait between retries to the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.connection.max.retries|3|Max retry times for connecting to the zookeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.acl.enabled|false|Set to true if the ZooKeeper ensemble is kerberized|boolean|1.0.0
+kyuubi.ha.zookeeper.auth.digest|&lt;undefined&gt;|The digest auth string is used for ZooKeeper authentication, like: username:password.|string|1.3.2
+kyuubi.ha.zookeeper.auth.keytab|&lt;undefined&gt;|Location of the Kyuubi server's keytab used for ZooKeeper authentication.|string|1.3.2
+kyuubi.ha.zookeeper.auth.principal|&lt;undefined&gt;|Name of the Kerberos principal used for ZooKeeper authentication.|string|1.3.2
+kyuubi.ha.zookeeper.auth.type|NONE|The type of ZooKeeper authentication, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
+kyuubi.ha.zookeeper.connection.base.retry.wait|1000|Initial amount of time to wait between retries to the ZooKeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.connection.max.retries|3|Max retry times for connecting to the ZooKeeper ensemble|int|1.0.0
 kyuubi.ha.zookeeper.connection.max.retry.wait|30000|Max amount of time to wait between retries for BOUNDED_EXPONENTIAL_BACKOFF policy can reach, or max time until elapsed for UNTIL_ELAPSED policy to connect the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.connection.retry.policy|EXPONENTIAL_BACKOFF|The retry policy for connecting to the zookeeper ensemble, all candidates are: <ul><li>ONE_TIME</li><li> N_TIME</li><li> EXPONENTIAL_BACKOFF</li><li> BOUNDED_EXPONENTIAL_BACKOFF</li><li> UNTIL_ELAPSED</li></ul>|string|1.0.0
-kyuubi.ha.zookeeper.connection.timeout|15000|The timeout(ms) of creating the connection to the zookeeper ensemble|int|1.0.0
-kyuubi.ha.zookeeper.engine.auth.type|NONE|The type of zookeeper authentication for engine, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
+kyuubi.ha.zookeeper.connection.retry.policy|EXPONENTIAL_BACKOFF|The retry policy for connecting to the ZooKeeper ensemble, all candidates are: <ul><li>ONE_TIME</li><li> N_TIME</li><li> EXPONENTIAL_BACKOFF</li><li> BOUNDED_EXPONENTIAL_BACKOFF</li><li> UNTIL_ELAPSED</li></ul>|string|1.0.0
+kyuubi.ha.zookeeper.connection.timeout|15000|The timeout(ms) of creating the connection to the ZooKeeper ensemble|int|1.0.0
+kyuubi.ha.zookeeper.engine.auth.type|NONE|The type of ZooKeeper authentication for the engine, all candidates are <ul><li>NONE</li><li> KERBEROS</li><li> DIGEST</li></ul>|string|1.3.2
 kyuubi.ha.zookeeper.namespace|kyuubi|(deprecated) The root directory for the service to deploy its instance uri|string|1.0.0
-kyuubi.ha.zookeeper.node.creation.timeout|PT2M|Timeout for creating zookeeper node|duration|1.2.0
-kyuubi.ha.zookeeper.publish.configs|false|When set to true, publish Kerberos configs to Zookeeper.Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch.|boolean|1.4.0
-kyuubi.ha.zookeeper.quorum||(deprecated) The connection string for the zookeeper ensemble|string|1.0.0
+kyuubi.ha.zookeeper.node.creation.timeout|PT2M|Timeout for creating ZooKeeper node|duration|1.2.0
+kyuubi.ha.zookeeper.publish.configs|false|When set to true, publish Kerberos configs to ZooKeeper. Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch.|boolean|1.4.0
+kyuubi.ha.zookeeper.quorum||(deprecated) The connection string for the ZooKeeper ensemble|string|1.0.0
 kyuubi.ha.zookeeper.session.timeout|60000|The timeout(ms) of a connected session to be idled|int|1.0.0
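
A hedged HA sketch tying the ZooKeeper options above together; the quorum addresses are placeholders:

```
# Discovery ensemble and root namespace
kyuubi.ha.addresses=zk1:2181,zk2:2181,zk3:2181
kyuubi.ha.namespace=kyuubi
# For a kerberized ensemble
kyuubi.ha.zookeeper.acl.enabled=true
kyuubi.ha.zookeeper.auth.type=KERBEROS
```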
 
 
@@ -378,7 +378,7 @@ kyuubi.ha.zookeeper.session.timeout|60000|The timeout(ms) of a connected session
 
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
-kyuubi.kinit.interval|PT1H|How often will Kyuubi server run `kinit -kt [keytab] [principal]` to renew the local Kerberos credentials cache|duration|1.0.0
+kyuubi.kinit.interval|PT1H|How often will the Kyuubi server run `kinit -kt [keytab] [principal]` to renew the local Kerberos credentials cache|duration|1.0.0
 kyuubi.kinit.keytab|&lt;undefined&gt;|Location of Kyuubi server's keytab.|string|1.0.0
 kyuubi.kinit.max.attempts|10|How many times will `kinit` process retry|int|1.0.0
 kyuubi.kinit.principal|&lt;undefined&gt;|Name of the Kerberos principal.|string|1.0.0
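
A sketch of the kinit settings above; the principal and keytab path are placeholder assumptions:

```
kyuubi.kinit.principal=kyuubi/_HOST@EXAMPLE.COM
kyuubi.kinit.keytab=/etc/security/keytabs/kyuubi.service.keytab
# Renew the local Kerberos credentials cache hourly
kyuubi.kinit.interval=PT1H
```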
@@ -391,9 +391,9 @@ Key | Default | Meaning | Type | Since
 kyuubi.kubernetes.authenticate.caCertFile|&lt;undefined&gt;|Path to the CA cert file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme)|string|1.7.0
 kyuubi.kubernetes.authenticate.clientCertFile|&lt;undefined&gt;|Path to the client cert file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme)|string|1.7.0
 kyuubi.kubernetes.authenticate.clientKeyFile|&lt;undefined&gt;|Path to the client key file for connecting to the Kubernetes API server over TLS from the kyuubi. Specify this as a path as opposed to a URI (i.e. do not provide a scheme)|string|1.7.0
-kyuubi.kubernetes.authenticate.oauthToken|&lt;undefined&gt;|The OAuth token to use when authenticating against the Kubernetes API server. Note that unlike the other authentication options, this must be the exact string value of the token to use for the authentication.|string|1.7.0
+kyuubi.kubernetes.authenticate.oauthToken|&lt;undefined&gt;|The OAuth token to use when authenticating against the Kubernetes API server. Note that unlike the other authentication options, this must be the exact string value of the token to use for the authentication.|string|1.7.0
 kyuubi.kubernetes.authenticate.oauthTokenFile|&lt;undefined&gt;|Path to the file containing the OAuth token to use when authenticating against the Kubernetes API server. Specify this as a path as opposed to a URI (i.e. do not provide a scheme)|string|1.7.0
-kyuubi.kubernetes.context|&lt;undefined&gt;|The desired context from your kubernetes config file used to configure the K8S client for interacting with the cluster.|string|1.6.0
+kyuubi.kubernetes.context|&lt;undefined&gt;|The desired context from your kubernetes config file used to configure the K8s client for interacting with the cluster.|string|1.6.0
 kyuubi.kubernetes.master.address|&lt;undefined&gt;|The internal Kubernetes master (API server) address to be used for kyuubi.|string|1.7.0
 kyuubi.kubernetes.namespace|default|The namespace that will be used for running the kyuubi pods and find engines.|string|1.7.0
 kyuubi.kubernetes.trust.certificates|false|If set to true then client can submit to kubernetes cluster only with token|boolean|1.7.0
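
An illustrative Kubernetes client configuration; the API server address and token path are assumptions:

```
kyuubi.kubernetes.master.address=https://k8s-api.example.com:6443
kyuubi.kubernetes.namespace=kyuubi
# Token-file based authentication (a path, not a URI)
kyuubi.kubernetes.authenticate.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token
```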
@@ -403,20 +403,20 @@ kyuubi.kubernetes.trust.certificates|false|If set to true then client can submit
 
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
-kyuubi.metadata.cleaner.enabled|true|Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the metadata that is in terminate state with max age limitation.|boolean|1.6.0
+kyuubi.metadata.cleaner.enabled|true|Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the metadata that is in the terminated state with the max age limitation.|boolean|1.6.0
 kyuubi.metadata.cleaner.interval|PT30M|The interval to check and clean expired metadata.|duration|1.6.0
-kyuubi.metadata.max.age|PT72H|The maximum age of metadata, the metadata that exceeds the age will be cleaned.|duration|1.6.0
-kyuubi.metadata.recovery.threads|10|The number of threads for recovery from metadata store when Kyuubi server restarting.|int|1.6.0
+kyuubi.metadata.max.age|PT72H|The maximum age of metadata; the metadata exceeding the age will be cleaned.|duration|1.6.0
+kyuubi.metadata.recovery.threads|10|The number of threads for recovery from the metadata store when the Kyuubi server restarts.|int|1.6.0
 kyuubi.metadata.request.retry.interval|PT5S|The interval to check and trigger the metadata request retry tasks.|duration|1.6.0
 kyuubi.metadata.request.retry.queue.size|65536|The maximum queue size for buffering metadata requests in memory when the external metadata storage is down. Requests will be dropped if the queue exceeds.|int|1.6.0
-kyuubi.metadata.request.retry.threads|10|Number of threads in the metadata request retry manager thread pool. The metadata store might be unavailable sometimes and the requests will fail, to tolerant for this case and unblock the main thread, we support to retry the failed requests in async way.|int|1.6.0
+kyuubi.metadata.request.retry.threads|10|Number of threads in the metadata request retry manager thread pool. The metadata store might be unavailable sometimes and the requests will fail; to be tolerant of this case and unblock the main thread, we support retrying the failed requests in an async way.|int|1.6.0
 kyuubi.metadata.store.class|org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataStore|Fully qualified class name for server metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.database.schema.init|true|Whether to init the jdbc metadata store database schema.|boolean|1.6.0
-kyuubi.metadata.store.jdbc.database.type|DERBY|The database type for server jdbc metadata store.<ul> <li>DERBY: Apache Derby, jdbc driver `org.apache.derby.jdbc.AutoloadedDriver`.</li> <li>MYSQL: MySQL, jdbc driver `com.mysql.jdbc.Driver`.</li> <li>CUSTOM: User-defined database type, need to specify corresponding jdbc driver.</li> Note that: The jdbc datasource is powered by HiKariCP, for datasource properties, please specify them with prefix: kyuubi.metadata.store.jdbc.datasource. For e [...]
+kyuubi.metadata.store.jdbc.database.schema.init|true|Whether to init the JDBC metadata store database schema.|boolean|1.6.0
+kyuubi.metadata.store.jdbc.database.type|DERBY|The database type for the server JDBC metadata store.<ul> <li>DERBY: Apache Derby, JDBC driver `org.apache.derby.jdbc.AutoloadedDriver`.</li> <li>MYSQL: MySQL, JDBC driver `com.mysql.jdbc.Driver`.</li> <li>CUSTOM: User-defined database type, need to specify corresponding JDBC driver.</li> Note that: The JDBC datasource is powered by HikariCP, for datasource properties, please specify them with the prefix: kyuubi.metadata.store.jdbc.datasource. F [...]
 kyuubi.metadata.store.jdbc.driver|&lt;undefined&gt;|JDBC driver class name for server jdbc metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.password||The password for server jdbc metadata store.|string|1.6.0
-kyuubi.metadata.store.jdbc.url|jdbc:derby:memory:kyuubi_state_store_db;create=true|The jdbc url for server jdbc metadata store. By defaults, it is a DERBY in-memory database url, and the state information is not shared across kyuubi instances. To enable multiple kyuubi instances high available, please specify a production jdbc url.|string|1.6.0
-kyuubi.metadata.store.jdbc.user||The username for server jdbc metadata store.|string|1.6.0
+kyuubi.metadata.store.jdbc.password||The password for server JDBC metadata store.|string|1.6.0
+kyuubi.metadata.store.jdbc.url|jdbc:derby:memory:kyuubi_state_store_db;create=true|The JDBC URL for the server JDBC metadata store. By default, it is a DERBY in-memory database URL, and the state information is not shared across Kyuubi instances. To enable high availability for multiple Kyuubi instances, please specify a production JDBC URL.|string|1.6.0
+kyuubi.metadata.store.jdbc.user||The username for server JDBC metadata store.|string|1.6.0
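
Since the default in-memory Derby URL above is not shared across instances, a multi-instance deployment might point the metadata store at an external database; the MySQL host and credentials here are placeholders:

```
kyuubi.metadata.store.jdbc.database.type=MYSQL
kyuubi.metadata.store.jdbc.url=jdbc:mysql://mysql.example.com:3306/kyuubi
kyuubi.metadata.store.jdbc.user=kyuubi
kyuubi.metadata.store.jdbc.password=secret
```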
 
 
 ### Metrics
@@ -425,11 +425,11 @@ Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 kyuubi.metrics.console.interval|PT5S|How often should report metrics to console|duration|1.2.0
 kyuubi.metrics.enabled|true|Set to true to enable kyuubi metrics system|boolean|1.2.0
-kyuubi.metrics.json.interval|PT5S|How often should report metrics to json file|duration|1.2.0
-kyuubi.metrics.json.location|metrics|Where the json metrics file located|string|1.2.0
+kyuubi.metrics.json.interval|PT5S|How often should report metrics to JSON file|duration|1.2.0
+kyuubi.metrics.json.location|metrics|Where the JSON metrics file is located|string|1.2.0
 kyuubi.metrics.prometheus.path|/metrics|URI context path of prometheus metrics HTTP server|string|1.2.0
 kyuubi.metrics.prometheus.port|10019|Prometheus metrics HTTP server port|int|1.2.0
-kyuubi.metrics.reporters|JSON|A comma separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to json file periodically.</li> <li>PROMETHEUS - PrometheusReporter which exposes metrics in prometheus format.</li> <li>SLF4J - Slf4jReporter which outputs measurements to system log p [...]
+kyuubi.metrics.reporters|JSON|A comma-separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to a JSON file periodically.</li> <li>PROMETHEUS - PrometheusReporter which exposes metrics in Prometheus format.</li> <li>SLF4J - Slf4jReporter which outputs measurements to system log p [...]
 kyuubi.metrics.slf4j.interval|PT5S|How often should report metrics to SLF4J logger|duration|1.2.0
 
 
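To tie the reporter entries together, here is a minimal sketch that enables the Prometheus reporter alongside the default JSON one; the reporter names follow the table above, and the port value is only an example:

```scala
import org.apache.kyuubi.config.KyuubiConf

// Sketch: expose metrics for Prometheus scraping in addition to the JSON file.
val metricsConf = KyuubiConf()
  .set("kyuubi.metrics.enabled", "true")
  .set("kyuubi.metrics.reporters", "JSON,PROMETHEUS")
  .set("kyuubi.metrics.prometheus.port", "10019")
```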
@@ -441,15 +441,15 @@ kyuubi.operation.idle.timeout|PT3H|Operation will be closed when it's not access
 kyuubi.operation.interrupt.on.cancel|true|When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished.|boolean|1.2.0
 kyuubi.operation.language|SQL|Choose a programming language for the following inputs <ul><li>SQL: (Default) Run all following statements as SQL queries.</li> <li>SCALA: Run all following input as Scala code</li></ul>|string|1.5.0
 kyuubi.operation.log.dir.root|server_operation_logs|Root directory for query operation log at server-side.|string|1.4.0
-kyuubi.operation.plan.only.excludes|ResetCommand,SetCommand,SetNamespaceCommand,UseStatement,SetCatalogAndNamespace|Comma-separated list of query plan names, in the form of simple class names, i.e, for `set abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as `switch databases`, `set properties`, or `create temporary view` e.t.c, which are used for setup evaluating environments for analyzing actual queries, we can use this config to exclude them and let them take  [...]
+kyuubi.operation.plan.only.excludes|ResetCommand,SetCommand,SetNamespaceCommand,UseStatement,SetCatalogAndNamespace|Comma-separated list of query plan names, in the form of simple class names, i.e., for `SET abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as `switch databases`, `set properties`, or `create temporary view` etc., which are used to set up evaluating environments for analyzing actual queries, we can use this config to exclude them and let them take e [...]
 kyuubi.operation.plan.only.mode|none|Configures the statement performed mode. The value can be 'parse', 'analyze', 'optimize', 'optimize_with_stats', 'physical', 'execution', or 'none'; when it is 'none', the statement will be fully executed, otherwise it will only be planned without executing the query. Different engines currently support different modes: the Spark engine supports all modes, and the Flink engine supports 'parse', 'physical', and 'execution', other engines do not support pl [...]
-kyuubi.operation.plan.only.output.style|plain|Configures the planOnly output style, The value can be 'plain' and 'json', default value is 'plain', this configuration supports only the output styles of the Spark engine|string|1.7.0
+kyuubi.operation.plan.only.output.style|plain|Configures the planOnly output style. The value can be 'plain' or 'json', and the default value is 'plain'. This configuration supports only the output styles of the Spark engine|string|1.7.0
 kyuubi.operation.progress.enabled|false|Whether to enable the operation progress. When true, the operation progress will be returned in `GetOperationStatus`.|boolean|1.6.0
-kyuubi.operation.query.timeout|&lt;undefined&gt;|Timeout for query executions at server-side, take affect with client-side timeout(`java.sql.Statement.setQueryTimeout`) together, a running query will be cancelled automatically if timeout. It's off by default, which means only client-side take fully control whether the query should timeout or not. If set, client-side timeout capped at this point. To cancel the queries right away without waiting task to finish, consider enabling kyuubi.ope [...]
+kyuubi.operation.query.timeout|&lt;undefined&gt;|Timeout for query executions at server-side, it takes effect with client-side timeout(`java.sql.Statement.setQueryTimeout`) together, a running query will be cancelled automatically if it times out. It's off by default, which means only the client side takes full control of whether the query should time out or not. If set, the client-side timeout is capped at this point. To cancel the queries right away without waiting for the task to finish, consider enabling k [...]
 kyuubi.operation.result.format|thrift|Specify the result format, available configs are: <ul> <li>THRIFT: the result will convert to TRow at the engine driver side. </li> <li>ARROW: the result will be encoded as Arrow at the executor side before collecting by the driver, and deserialized at the client side. note that it only takes effect for kyuubi-hive-jdbc clients now.</li></ul>|string|1.7.0
-kyuubi.operation.result.max.rows|0|Max rows of Spark query results. Rows that exceeds the limit would be ignored. By setting this value to 0 to disable the max rows limit.|int|1.6.0
-kyuubi.operation.scheduler.pool|&lt;undefined&gt;|The scheduler pool of job. Note that, this config should be used after change Spark config spark.scheduler.mode=FAIR.|string|1.1.1
-kyuubi.operation.spark.listener.enabled|true|When set to true, Spark engine registers a SQLOperationListener before executing the statement, logs a few summary statistics when each stage completes.|boolean|1.6.0
+kyuubi.operation.result.max.rows|0|Max rows of Spark query results. Rows exceeding the limit would be ignored. Set this value to 0 to disable the max rows limit.|int|1.6.0
+kyuubi.operation.scheduler.pool|&lt;undefined&gt;|The scheduler pool for the job. Note that this config should be used after changing the Spark config spark.scheduler.mode=FAIR.|string|1.1.1
+kyuubi.operation.spark.listener.enabled|true|When set to true, Spark engine registers an SQLOperationListener before executing the statement, logging a few summary statistics when each stage completes.|boolean|1.6.0
 kyuubi.operation.status.polling.timeout|PT5S|Timeout(ms) for long polling asynchronous running sql query's status|duration|1.0.0
 
 
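Because `kyuubi.operation.plan.only.mode` can be changed per session, a client can inspect plans without executing anything. A minimal JDBC sketch, assuming the kyuubi-hive-jdbc driver is on the classpath and a server at `localhost:10009` (both illustrative):

```scala
import java.sql.DriverManager

// Sketch: switch the session into plan-only mode, fetch an optimized plan,
// then restore normal execution. Endpoint, user, and query are placeholders.
val conn = DriverManager.getConnection("jdbc:hive2://localhost:10009/default", "user", "")
val stmt = conn.createStatement()
stmt.execute("SET kyuubi.operation.plan.only.mode=optimize")
val rs = stmt.executeQuery("SELECT * FROM src") // returns the plan text, not rows
while (rs.next()) println(rs.getString(1))
stmt.execute("SET kyuubi.operation.plan.only.mode=none")
conn.close()
```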
@@ -474,45 +474,45 @@ kyuubi.server.redaction.regex|&lt;undefined&gt;|Regex to decide which Kyuubi con
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 kyuubi.session.check.interval|PT5M|The check interval for session timeout.|duration|1.0.0
-kyuubi.session.conf.advisor|&lt;undefined&gt;|A config advisor plugin for Kyuubi Server. This plugin can provide some custom configs for different user or session configs and overwrite the session configs before open a new session. This config value should be a class which is a child of 'org.apache.kyuubi.plugin.SessionConfAdvisor' which has zero-arg constructor.|string|1.5.0
+kyuubi.session.conf.advisor|&lt;undefined&gt;|A config advisor plugin for Kyuubi Server. This plugin can provide some custom configs for different users or session configs and overwrite the session configs before opening a new session. This config value should be a subclass of `org.apache.kyuubi.plugin.SessionConfAdvisor` which has a zero-arg constructor.|string|1.5.0
 kyuubi.session.conf.file.reload.interval|PT10M|When `FileSessionConfAdvisor` is used, this configuration defines the expired time of `$KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf` in the cache. After exceeding this value, the file will be reloaded.|duration|1.7.0
-kyuubi.session.conf.ignore.list||A comma separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
-kyuubi.session.conf.profile|&lt;undefined&gt;|Specify a profile to load session-level configurations from `$KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf`. This configuration will be ignored if the file does not exist. This configuration only has effect when `kyuubi.session.conf.advisor` is set as `org.apache.kyuubi.session.FileSessionConfAdvisor`.|string|1.7.0
-kyuubi.session.conf.restrict.list||A comma separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
-kyuubi.session.engine.alive.probe.enabled|false|Whether to enable the engine alive probe, it true, we will create a companion thrift client that sends simple request to check whether the engine is keep alive.|boolean|1.6.0
+kyuubi.session.conf.ignore.list||A comma-separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
+kyuubi.session.conf.profile|&lt;undefined&gt;|Specify a profile to load session-level configurations from `$KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf`. This configuration will be ignored if the file does not exist. This configuration only takes effect when `kyuubi.session.conf.advisor` is set as `org.apache.kyuubi.session.FileSessionConfAdvisor`.|string|1.7.0
+kyuubi.session.conf.restrict.list||A comma-separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax.|seq|1.2.0
+kyuubi.session.engine.alive.probe.enabled|false|Whether to enable the engine alive probe. If true, we will create a companion thrift client that keeps sending simple requests to check whether the engine is alive.|boolean|1.6.0
 kyuubi.session.engine.alive.probe.interval|PT10S|The interval for engine alive probe.|duration|1.6.0
 kyuubi.session.engine.alive.timeout|PT2M|The timeout for engine alive. If there is no alive probe success in the last timeout window, the engine will be marked as no-alive.|duration|1.6.0
 kyuubi.session.engine.check.interval|PT1M|The check interval for engine timeout|duration|1.0.0
 kyuubi.session.engine.flink.main.resource|&lt;undefined&gt;|The package used to create Flink SQL engine remote job. If it is undefined, Kyuubi will use the default|string|1.4.0
-kyuubi.session.engine.flink.max.rows|1000000|Max rows of Flink query results. For batch queries, rows that exceeds the limit would be ignored. For streaming queries, the query would be canceled if the limit is reached.|int|1.5.0
+kyuubi.session.engine.flink.max.rows|1000000|Max rows of Flink query results. For batch queries, rows exceeding the limit would be ignored. For streaming queries, the query would be canceled if the limit is reached.|int|1.5.0
 kyuubi.session.engine.hive.main.resource|&lt;undefined&gt;|The package used to create Hive engine remote job. If it is undefined, Kyuubi will use the default|string|1.6.0
 kyuubi.session.engine.idle.timeout|PT30M|engine timeout, the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate.|duration|1.0.0
 kyuubi.session.engine.initialize.timeout|PT3M|Timeout for starting the background engine, e.g. SparkSQLEngine.|duration|1.0.0
-kyuubi.session.engine.launch.async|true|When opening kyuubi session, whether to launch backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously.|boolean|1.4.0
+kyuubi.session.engine.launch.async|true|When opening kyuubi session, whether to launch the backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously.|boolean|1.4.0
 kyuubi.session.engine.log.timeout|PT24H|If we use Spark as the engine then the session submit log is the console output of spark-submit. We will retain the session submit log until over the config value.|duration|1.1.0
 kyuubi.session.engine.login.timeout|PT15S|The timeout of creating the connection to remote sql query engine|duration|1.0.0
 kyuubi.session.engine.open.max.attempts|9|The number of times an open engine will retry when encountering a special error.|int|1.7.0
-kyuubi.session.engine.open.retry.wait|PT10S|How long to wait before retrying to open engine after a failure.|duration|1.7.0
+kyuubi.session.engine.open.retry.wait|PT10S|How long to wait before retrying to open the engine after failure.|duration|1.7.0
 kyuubi.session.engine.share.level|USER|(deprecated) - Using kyuubi.engine.share.level instead|string|1.0.0
 kyuubi.session.engine.spark.main.resource|&lt;undefined&gt;|The package used to create Spark SQL engine remote application. If it is undefined, Kyuubi will use the default|string|1.0.0
-kyuubi.session.engine.spark.max.lifetime|PT0S|Max lifetime for spark engine, the engine will self-terminate when it reaches the end of life. 0 or negative means not to self-terminate.|duration|1.6.0
+kyuubi.session.engine.spark.max.lifetime|PT0S|Max lifetime for Spark engine, the engine will self-terminate when it reaches the end of life. 0 or negative means not to self-terminate.|duration|1.6.0
 kyuubi.session.engine.spark.progress.timeFormat|yyyy-MM-dd HH:mm:ss.SSS|The time format of the progress bar|string|1.6.0
 kyuubi.session.engine.spark.progress.update.interval|PT1S|Update period of progress bar.|duration|1.6.0
-kyuubi.session.engine.spark.showProgress|false|When true, show the progress bar in the spark engine log.|boolean|1.6.0
-kyuubi.session.engine.startup.error.max.size|8192|During engine bootstrapping, if error occurs, using this config to limit the length error message(characters).|int|1.1.0
-kyuubi.session.engine.startup.maxLogLines|10|The maximum number of engine log lines when errors occur during engine startup phase. Note that this max lines is for client-side to help track engine startup issue.|int|1.4.0
-kyuubi.session.engine.startup.waitCompletion|true|Whether to wait for completion after engine starts. If false, the startup process will be destroyed after the engine is started. Note that only use it when the driver is not running locally, such as yarn-cluster mode; Otherwise, the engine will be killed.|boolean|1.5.0
-kyuubi.session.engine.trino.connection.catalog|&lt;undefined&gt;|The default catalog that trino engine will connect to|string|1.5.0
-kyuubi.session.engine.trino.connection.url|&lt;undefined&gt;|The server url that trino engine will connect to|string|1.5.0
+kyuubi.session.engine.spark.showProgress|false|When true, show the progress bar in the Spark engine log.|boolean|1.6.0
+kyuubi.session.engine.startup.error.max.size|8192|During engine bootstrapping, if an error occurs, use this config to limit the length of the error message (characters).|int|1.1.0
+kyuubi.session.engine.startup.maxLogLines|10|The maximum number of engine log lines when errors occur during the engine startup phase. Note that this config takes effect on the client side to help track engine startup issues.|int|1.4.0
+kyuubi.session.engine.startup.waitCompletion|true|Whether to wait for completion after the engine starts. If false, the startup process will be destroyed after the engine is started. Note that this should only be used when the driver is not running locally, such as in yarn-cluster mode; otherwise, the engine will be killed.|boolean|1.5.0
+kyuubi.session.engine.trino.connection.catalog|&lt;undefined&gt;|The default catalog that Trino engine will connect to|string|1.5.0
+kyuubi.session.engine.trino.connection.url|&lt;undefined&gt;|The server url that Trino engine will connect to|string|1.5.0
 kyuubi.session.engine.trino.main.resource|&lt;undefined&gt;|The package used to create Trino engine remote job. If it is undefined, Kyuubi will use the default|string|1.5.0
-kyuubi.session.engine.trino.showProgress|true|When true, show the progress bar and final info in the trino engine log.|boolean|1.6.0
-kyuubi.session.engine.trino.showProgress.debug|false|When true, show the progress debug info in the trino engine log.|boolean|1.6.0
-kyuubi.session.group.provider|hadoop|A group provider plugin for Kyuubi Server. This plugin can provide primary group and groups information for different user or session configs. This config value should be a class which is a child of 'org.apache.kyuubi.plugin.GroupProvider' which has zero-arg constructor. Kyuubi provides the following built-in implementations: <li>hadoop: delegate the user group mapping to hadoop UserGroupInformation.</li>|string|1.7.0
+kyuubi.session.engine.trino.showProgress|true|When true, show the progress bar and final info in the Trino engine log.|boolean|1.6.0
+kyuubi.session.engine.trino.showProgress.debug|false|When true, show the progress debug info in the Trino engine log.|boolean|1.6.0
+kyuubi.session.group.provider|hadoop|A group provider plugin for Kyuubi Server. This plugin can provide primary group and groups information for different users or session configs. This config value should be a subclass of `org.apache.kyuubi.plugin.GroupProvider` which has a zero-arg constructor. Kyuubi provides the following built-in implementations: <li>hadoop: delegate the user group mapping to hadoop UserGroupInformation.</li>|string|1.7.0
 kyuubi.session.idle.timeout|PT6H|session idle timeout, it will be closed when it's not accessed for this duration|duration|1.2.0
-kyuubi.session.local.dir.allow.list||The local dir list that are allowed to access by the kyuubi session application. User might set some parameters such as `spark.files` and it will upload some local files when launching the kyuubi engine, if the local dir allow list is defined, kyuubi will check whether the path to upload is in the allow list. Note that, if it is empty, there is no limitation for that and please use absolute path list.|seq|1.6.0
-kyuubi.session.name|&lt;undefined&gt;|A human readable name of session and we use empty string by default. This name will be recorded in event. Note that, we only apply this value from session conf.|string|1.4.0
+kyuubi.session.local.dir.allow.list||The local dir list that are allowed to be accessed by the kyuubi session application. End-users might set some parameters such as `spark.files` and it will upload some local files when launching the kyuubi engine; if the local dir allow list is defined, kyuubi will check whether the path to upload is in the allow list. Note that, if it is empty, there is no limitation for that, and please use absolute paths.|seq|1.6.0
+kyuubi.session.name|&lt;undefined&gt;|A human-readable name of the session and we use empty string by default. This name will be recorded in the event. Note that we only apply this value from session conf.|string|1.4.0
 kyuubi.session.timeout|PT6H|(deprecated)session timeout, it will be closed when it's not accessed for this duration|duration|1.0.0
-kyuubi.session.user.sign.enabled|false|Whether to verify the integrity of session user name on engine side, e.g. Authz plugin in Spark.|boolean|1.7.0
+kyuubi.session.user.sign.enabled|false|Whether to verify the integrity of session user name on the engine side, e.g. Authz plugin in Spark.|boolean|1.7.0
 
 
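To make the `kyuubi.session.conf.advisor` contract in the table above concrete, here is a minimal sketch of a custom advisor. It assumes the `getConfOverlay` method of `org.apache.kyuubi.plugin.SessionConfAdvisor`, and the overlay policy is purely illustrative:

```scala
import java.util.{Collections, Map => JMap}
import org.apache.kyuubi.plugin.SessionConfAdvisor

// Sketch: overlay a conservative row limit on every new session.
class RowLimitAdvisor extends SessionConfAdvisor {
  override def getConfOverlay(
      user: String,
      sessionConf: JMap[String, String]): JMap[String, String] = {
    // Returned entries overwrite the client-supplied session configs.
    Collections.singletonMap("kyuubi.operation.result.max.rows", "10000")
  }
}
```

The class only needs a zero-arg constructor and must be visible on the server classpath.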
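Similarly, `kyuubi.session.group.provider` accepts a custom implementation. A minimal sketch, assuming the `primaryGroup` and `groups` methods of `org.apache.kyuubi.plugin.GroupProvider`; the static mapping is illustrative only, and a real provider would usually delegate to LDAP or Hadoop group mapping:

```scala
import java.util.{Map => JMap}
import org.apache.kyuubi.plugin.GroupProvider

// Sketch: derive a user's groups from a naming convention (placeholder logic).
class StaticGroupProvider extends GroupProvider {
  override def primaryGroup(user: String, sessionConf: JMap[String, String]): String =
    if (user.startsWith("bi_")) "bi" else "default"

  override def groups(user: String, sessionConf: JMap[String, String]): Array[String] =
    Array(primaryGroup(user, sessionConf))
}
```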
 ### Spnego
@@ -527,16 +527,16 @@ kyuubi.spnego.principal|&lt;undefined&gt;|SPNego service principal, typical valu
 
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
-kyuubi.zookeeper.embedded.client.port|2181|clientPort for the embedded zookeeper server to listen for client connections, a client here could be Kyuubi server, engine and JDBC client|int|1.2.0
-kyuubi.zookeeper.embedded.client.port.address|&lt;undefined&gt;|clientPortAddress for the embedded zookeeper server to|string|1.2.0
+kyuubi.zookeeper.embedded.client.port|2181|clientPort for the embedded ZooKeeper server to listen for client connections, a client here could be Kyuubi server, engine, or JDBC client|int|1.2.0
+kyuubi.zookeeper.embedded.client.port.address|&lt;undefined&gt;|clientPortAddress for the embedded ZooKeeper server to listen for client connections|string|1.2.0
 kyuubi.zookeeper.embedded.data.dir|embedded_zookeeper|dataDir for the embedded ZooKeeper server where it stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.|string|1.2.0
-kyuubi.zookeeper.embedded.data.log.dir|embedded_zookeeper|dataLogDir for the embedded zookeeper server where writes the transaction log .|string|1.2.0
-kyuubi.zookeeper.embedded.directory|embedded_zookeeper|The temporary directory for the embedded zookeeper server|string|1.0.0
-kyuubi.zookeeper.embedded.max.client.connections|120|maxClientCnxns for the embedded zookeeper server to limits the number of concurrent connections of a single client identified by IP address|int|1.2.0
-kyuubi.zookeeper.embedded.max.session.timeout|60000|maxSessionTimeout in milliseconds for the embedded zookeeper server will allow the client to negotiate. Defaults to 20 times the tickTime|int|1.2.0
-kyuubi.zookeeper.embedded.min.session.timeout|6000|minSessionTimeout in milliseconds for the embedded zookeeper server will allow the client to negotiate. Defaults to 2 times the tickTime|int|1.2.0
-kyuubi.zookeeper.embedded.port|2181|The port of the embedded zookeeper server|int|1.0.0
-kyuubi.zookeeper.embedded.tick.time|3000|tickTime in milliseconds for the embedded zookeeper server|int|1.2.0
+kyuubi.zookeeper.embedded.data.log.dir|embedded_zookeeper|dataLogDir for the embedded ZooKeeper server where the transaction log is written.|string|1.2.0
+kyuubi.zookeeper.embedded.directory|embedded_zookeeper|The temporary directory for the embedded ZooKeeper server|string|1.0.0
+kyuubi.zookeeper.embedded.max.client.connections|120|maxClientCnxns for the embedded ZooKeeper server to limit the number of concurrent connections of a single client identified by IP address|int|1.2.0
+kyuubi.zookeeper.embedded.max.session.timeout|60000|maxSessionTimeout in milliseconds that the embedded ZooKeeper server will allow the client to negotiate. Defaults to 20 times the tickTime|int|1.2.0
+kyuubi.zookeeper.embedded.min.session.timeout|6000|minSessionTimeout in milliseconds that the embedded ZooKeeper server will allow the client to negotiate. Defaults to 2 times the tickTime|int|1.2.0
+kyuubi.zookeeper.embedded.port|2181|The port of the embedded ZooKeeper server|int|1.0.0
+kyuubi.zookeeper.embedded.tick.time|3000|tickTime in milliseconds for the embedded ZooKeeper server|int|1.2.0
 
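The embedded ZooKeeper above is intended for trying Kyuubi out rather than for production. A minimal sketch of tuning it through `KyuubiConf`; the directory and timing values are illustrative:

```scala
import org.apache.kyuubi.config.KyuubiConf

// Sketch: adjust the embedded (test-oriented) ZooKeeper server.
val zkConf = KyuubiConf()
  .set("kyuubi.zookeeper.embedded.client.port", "2181")
  .set("kyuubi.zookeeper.embedded.data.dir", "/tmp/kyuubi/embedded_zookeeper")
  .set("kyuubi.zookeeper.embedded.tick.time", "3000")
```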
 ## Spark Configurations
 
@@ -558,7 +558,7 @@ Setting them in the JDBC Connection URL supplies session-specific for each SQL e
 
 - **Static SQL and Spark Core Configuration**
 
-  - For [Static SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other spark core configs, e.g. `spark.executor.memory`, they will take affect if there is no existing SQL engine application. Otherwise, they will just be ignored
+  - For [Static SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other Spark core configs, e.g. `spark.executor.memory`, they will take effect if there is no existing SQL engine application. Otherwise, they will just be ignored
 
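To illustrate the split, session-specific Spark settings can ride in the JDBC connection URL (the section after `#`), while static SQL and core settings only matter when a new engine is launched. A sketch with placeholder host and values:

```scala
import java.sql.DriverManager

// Sketch: everything after '#' is handed to the engine as session configs.
// Host, memory, and partition values are illustrative placeholders.
val url = "jdbc:hive2://localhost:10009/default;" +
  "#spark.sql.shuffle.partitions=64;spark.executor.memory=4g"
val conn = DriverManager.getConnection(url, "user", "")
```

If an engine already exists for the user, `spark.executor.memory` here would just be ignored, per the note above.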
 ### Via SET Syntax
 
@@ -662,11 +662,11 @@ Kyuubi uses [log4j](https://logging.apache.org/log4j/2.x/) for logging. You can
 
 ### Hadoop Configurations
 
-Specifying `HADOOP_CONF_DIR` to the directory contains hadoop configuration files or treating them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to the [Apache Hadoop](http://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
+Specify `HADOOP_CONF_DIR` pointing to the directory containing Hadoop configuration files, or treat them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to [Apache Hadoop](http://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
 
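As a quick illustration of the `spark.hadoop.` pass-through, any key with that prefix ends up in the engine's Hadoop `Configuration`; the property below is just an example:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: `spark.hadoop.`-prefixed keys are forwarded to the Hadoop configuration.
val spark = SparkSession.builder()
  .config("spark.hadoop.fs.defaultFS", "hdfs://namenode:8020")
  .getOrCreate()
```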
 ### Hive Configurations
 
-These configurations are used for SQL engine application to talk to Hive MetaStore and could be configured in a `hive-site.xml`. Placed it in `$SPARK_HOME/conf` directory, or treating them as Spark properties with a `spark.hadoop.` prefix.
+These configurations are used for the SQL engine application to talk to the Hive MetaStore and could be configured in a `hive-site.xml`. Place it in the `$SPARK_HOME/conf` directory, or treat them as Spark properties with a `spark.hadoop.` prefix.
 
 ## User Defaults
 
diff --git a/docs/extensions/engines/spark/functions.md b/docs/extensions/engines/spark/functions.md
index b467a3abb..b2a353b88 100644
--- a/docs/extensions/engines/spark/functions.md
+++ b/docs/extensions/engines/spark/functions.md
@@ -15,7 +15,7 @@
  - limitations under the License.
  -->
 
-<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY [org.apache.kyuubi.engine.spark.udf.KyuubiDefinedFunctionSuite] -->
+<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO-GENERATED BY [org.apache.kyuubi.engine.spark.udf.KyuubiDefinedFunctionSuite] -->
 
 # Auxiliary SQL Functions
 
diff --git a/docs/monitor/metrics.md b/docs/monitor/metrics.md
index 0a27cf43a..2f2ac1405 100644
--- a/docs/monitor/metrics.md
+++ b/docs/monitor/metrics.md
@@ -28,10 +28,10 @@ The metrics system is configured via `$KYUUBI_HOME/conf/kyuubi-defaults.conf`.
 Key | Default | Meaning | Type | Since
 --- | --- | --- | --- | ---
 `kyuubi.metrics.enabled`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>true</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Set to true to enable kyuubi metrics system</div>|<div style='width: 30pt'>boolean</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.reporters`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>JSON</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>A comma separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to json file periodically.</li> <li>PROMET [...]
+`kyuubi.metrics.reporters`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>JSON</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>A comma-separated list for all metrics reporters<ul> <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li> <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li>  <li>JSON - JsonReporter which outputs measurements to a JSON file periodically.</li> <li>PROMET [...]
 `kyuubi.metrics.console.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to console</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.json.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to json file</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
-`kyuubi.metrics.json.location`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>metrics</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Where the json metrics file located</div>|<div style='width: 30pt'>string</div>|<div style='width: 20pt'>1.2.0</div>
+`kyuubi.metrics.json.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to JSON file</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
+`kyuubi.metrics.json.location`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>metrics</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Where the JSON metrics file is located</div>|<div style='width: 30pt'>string</div>|<div style='width: 20pt'>1.2.0</div>
 `kyuubi.metrics.prometheus.path`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>/metrics</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>URI context path of prometheus metrics HTTP server</div>|<div style='width: 30pt'>string</div>|<div style='width: 20pt'>1.2.0</div>
 `kyuubi.metrics.prometheus.port`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>10019</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>Prometheus metrics HTTP server port</div>|<div style='width: 30pt'>int</div>|<div style='width: 20pt'>1.2.0</div>
 `kyuubi.metrics.slf4j.interval`|<div style='width: 65pt;word-wrap: break-word;white-space: normal'>PT5S</div>|<div style='width: 170pt;word-wrap: break-word;white-space: normal'>How often should report metrics to SLF4J logger</div>|<div style='width: 30pt'>duration</div>|<div style='width: 20pt'>1.2.0</div>
diff --git a/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/udf/KyuubiDefinedFunctionSuite.scala b/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/udf/KyuubiDefinedFunctionSuite.scala
index 4d38bc363..4b1d982b7 100644
--- a/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/udf/KyuubiDefinedFunctionSuite.scala
+++ b/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/udf/KyuubiDefinedFunctionSuite.scala
@@ -66,7 +66,7 @@ class KyuubiDefinedFunctionSuite extends KyuubiFunSuite {
     newOutput += " - limitations under the License."
     newOutput += " -->"
     newOutput += ""
-    newOutput += "<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY" +
+    newOutput += "<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO-GENERATED BY" +
       " [org.apache.kyuubi.engine.spark.udf.KyuubiDefinedFunctionSuite] -->"
     newOutput += ""
     newOutput += "# Auxiliary SQL Functions"
diff --git a/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala b/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
index 0e50e132e..c22ab42a4 100644
--- a/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
+++ b/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
@@ -284,7 +284,7 @@ object KyuubiConf {
     .createOptional
 
   val KINIT_INTERVAL: ConfigEntry[Long] = buildConf("kyuubi.kinit.interval")
-    .doc("How often will Kyuubi server run `kinit -kt [keytab] [principal]` to renew the" +
+    .doc("How often will the Kyuubi server run `kinit -kt [keytab] [principal]` to renew the" +
       " local Kerberos credentials cache")
     .version("1.0.0")
     .serverOnly
@@ -320,7 +320,7 @@ object KyuubiConf {
 
   val CREDENTIALS_UPDATE_WAIT_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.credentials.update.wait.timeout")
-      .doc("How long to wait until credentials are ready.")
+      .doc("How long to wait until the credentials are ready.")
       .version("1.5.0")
       .timeConf
       .checkValue(t => t > 0, "must be positive integer")
@@ -336,7 +336,7 @@ object KyuubiConf {
 
   val CREDENTIALS_IDLE_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.credentials.idle.timeout")
-      .doc("inactive users' credentials will be expired after a configured timeout")
+      .doc("The inactive users' credentials will be expired after a configured timeout")
       .version("1.6.0")
       .timeConf
       .checkValue(_ >= Duration.ofSeconds(3).toMillis, "Minimum 3 seconds")
@@ -376,7 +376,7 @@ object KyuubiConf {
 
   val FRONTEND_PROTOCOLS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.frontend.protocols")
-      .doc("A comma separated list for all frontend protocols " +
+      .doc("A comma-separated list for all frontend protocols " +
         "<ul>" +
         " <li>THRIFT_BINARY - HiveServer2 compatible thrift binary protocol.</li>" +
         " <li>THRIFT_HTTP - HiveServer2 compatible thrift http protocol.</li>" +
@@ -403,7 +403,7 @@ object KyuubiConf {
   val FRONTEND_THRIFT_BINARY_BIND_HOST: ConfigEntry[Option[String]] =
     buildConf("kyuubi.frontend.thrift.binary.bind.host")
       .doc("Hostname or IP of the machine on which to run the thrift frontend service " +
-        "via binary protocol.")
+        "via the binary protocol.")
       .version("1.4.0")
       .serverOnly
       .fallbackConf(FRONTEND_BIND_HOST)
@@ -454,7 +454,7 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_BINARY_SSL_INCLUDE_CIPHER_SUITES: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.frontend.thrift.binary.ssl.include.ciphersuites")
-      .doc("A comma separated list of include SSL cipher suite names for thrift binary frontend.")
+      .doc("A comma-separated list of include SSL cipher suite names for thrift binary frontend.")
       .version("1.7.0")
       .stringConf
       .toSequence()
@@ -463,7 +463,7 @@ object KyuubiConf {
   @deprecated("using kyuubi.frontend.thrift.binary.bind.port instead", "1.4.0")
   val FRONTEND_BIND_PORT: ConfigEntry[Int] = buildConf("kyuubi.frontend.bind.port")
     .doc("(deprecated) Port of the machine on which to run the thrift frontend service " +
-      "via binary protocol.")
+      "via the binary protocol.")
     .version("1.0.0")
     .serverOnly
     .intConf
@@ -472,7 +472,8 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_BINARY_BIND_PORT: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.thrift.binary.bind.port")
-      .doc("Port of the machine on which to run the thrift frontend service via binary protocol.")
+      .doc("Port of the machine on which to run the thrift frontend service " +
+        "via the binary protocol.")
       .version("1.4.0")
       .serverOnly
       .fallbackConf(FRONTEND_BIND_PORT)
@@ -496,7 +497,7 @@ object KyuubiConf {
 
   val FRONTEND_MIN_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.min.worker.threads")
-      .doc("(deprecated) Minimum number of threads in the of frontend worker thread pool for " +
+      .doc("(deprecated) Minimum number of threads in the frontend worker thread pool for " +
         "the thrift frontend service")
       .version("1.0.0")
       .intConf
@@ -504,14 +505,14 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_MIN_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.thrift.min.worker.threads")
-      .doc("Minimum number of threads in the of frontend worker thread pool for the thrift " +
+      .doc("Minimum number of threads in the frontend worker thread pool for the thrift " +
         "frontend service")
       .version("1.4.0")
       .fallbackConf(FRONTEND_MIN_WORKER_THREADS)
 
   val FRONTEND_MAX_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.max.worker.threads")
-      .doc("(deprecated) Maximum number of threads in the of frontend worker thread pool for " +
+      .doc("(deprecated) Maximum number of threads in the frontend worker thread pool for " +
         "the thrift frontend service")
       .version("1.0.0")
       .intConf
@@ -519,14 +520,14 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_MAX_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.thrift.max.worker.threads")
-      .doc("Maximum number of threads in the of frontend worker thread pool for the thrift " +
+      .doc("Maximum number of threads in the frontend worker thread pool for the thrift " +
         "frontend service")
       .version("1.4.0")
       .fallbackConf(FRONTEND_MAX_WORKER_THREADS)
 
   val FRONTEND_REST_MAX_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.rest.max.worker.threads")
-      .doc("Maximum number of threads in the of frontend worker thread pool for the rest " +
+      .doc("Maximum number of threads in the frontend worker thread pool for the rest " +
         "frontend service")
       .version("1.6.2")
       .fallbackConf(FRONTEND_MAX_WORKER_THREADS)
@@ -624,7 +625,7 @@ object KyuubiConf {
   val FRONTEND_THRIFT_HTTP_COOKIE_AUTH_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.frontend.thrift.http.cookie.auth.enabled")
       .doc("When true, Kyuubi in HTTP transport mode, " +
-        "will use cookie based authentication mechanism")
+        "will use cookie-based authentication mechanism")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(true)
@@ -659,7 +660,7 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_HTTP_XSRF_FILTER_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.frontend.thrift.http.xsrf.filter.enabled")
-      .doc("If enabled, Kyuubi will block any requests made to it over http " +
+      .doc("If enabled, Kyuubi will block any requests made to it over HTTP " +
         "if an X-XSRF-HEADER header is not present")
       .version("1.6.0")
       .booleanConf
@@ -699,7 +700,7 @@ object KyuubiConf {
 
   val FRONTEND_THRIFT_HTTP_SSL_EXCLUDE_CIPHER_SUITES: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.frontend.thrift.http.ssl.exclude.ciphersuites")
-      .doc("A comma separated list of exclude SSL cipher suite names for thrift http frontend.")
+      .doc("A comma-separated list of exclude SSL cipher suite names for thrift http frontend.")
       .version("1.7.0")
       .stringConf
       .toSequence()
@@ -715,18 +716,18 @@ object KyuubiConf {
 
   val FRONTEND_PROXY_HTTP_CLIENT_IP_HEADER: ConfigEntry[String] =
     buildConf("kyuubi.frontend.proxy.http.client.ip.header")
-      .doc("The http header to record the real client ip address. If your server is behind a load" +
+      .doc("The HTTP header to record the real client IP address. If your server is behind a load" +
         " balancer or other proxy, the server will see this load balancer or proxy IP address as" +
         " the client IP address, to get around this common issue, most load balancers or proxies" +
         " offer the ability to record the real remote IP address in an HTTP header that will be" +
         " added to the request for other devices to use. Note that, because the header value can" +
-        " be specified to any ip address, so it will not be used for authentication.")
+        " be specified to any IP address, so it will not be used for authentication.")
       .version("1.6.0")
       .stringConf
       .createWithDefault("X-Real-IP")
 
   val AUTHENTICATION_METHOD: ConfigEntry[Seq[String]] = buildConf("kyuubi.authentication")
-    .doc("A comma separated list of client authentication types.<ul>" +
+    .doc("A comma-separated list of client authentication types.<ul>" +
       " <li>NOSASL: raw transport.</li>" +
       " <li>NONE: no authentication check.</li>" +
       " <li>KERBEROS: Kerberos/GSSAPI authentication.</li>" +
@@ -735,9 +736,9 @@ object KyuubiConf {
       " <li>LDAP: Lightweight Directory Access Protocol authentication.</li>" +
       "</ul>" +
       " Note that: For KERBEROS, it is SASL/GSSAPI mechanism," +
-      " and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanism." +
+      " and for NONE, CUSTOM and LDAP, they are all SASL/PLAIN mechanisms." +
       " If only NOSASL is specified, the authentication will be NOSASL." +
-      " For SASL authentication, KERBEROS and PLAIN auth type are supported at the same time," +
+      " For SASL authentication, KERBEROS and PLAIN auth types are supported at the same time," +
       " and only the first specified PLAIN auth type is valid.")
     .version("1.0.0")
     .serverOnly
@@ -954,14 +955,14 @@ object KyuubiConf {
 
   val FRONTEND_TRINO_MAX_WORKER_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.frontend.trino.max.worker.threads")
-      .doc("Maximum number of threads in the of frontend worker thread pool for the trino " +
+      .doc("Maximum number of threads in the frontend worker thread pool for the Trino " +
         "frontend service")
       .version("1.7.0")
       .fallbackConf(FRONTEND_MAX_WORKER_THREADS)
 
   val KUBERNETES_CONTEXT: OptionalConfigEntry[String] =
     buildConf("kyuubi.kubernetes.context")
-      .doc("The desired context from your kubernetes config file used to configure the K8S " +
+      .doc("The desired context from your kubernetes config file used to configure the K8s " +
         "client for interacting with the cluster.")
       .version("1.6.0")
       .stringConf
@@ -993,8 +994,8 @@ object KyuubiConf {
   val KUBERNETES_AUTHENTICATE_OAUTH_TOKEN: OptionalConfigEntry[String] =
     buildConf("kyuubi.kubernetes.authenticate.oauthToken")
       .doc("The OAuth token to use when authenticating against the Kubernetes API server. " +
-        "Note that unlike the other authentication options, this must be the exact string value " +
-        "of the token to use for the authentication.")
+        "Note that unlike, the other authentication options, this must be the exact string value" +
+        " of the token to use for the authentication.")
       .version("1.7.0")
       .stringConf
       .createOptional
@@ -1039,8 +1040,8 @@ object KyuubiConf {
 
   val ENGINE_ERROR_MAX_SIZE: ConfigEntry[Int] =
     buildConf("kyuubi.session.engine.startup.error.max.size")
-      .doc("During engine bootstrapping, if error occurs, using this config to limit the length" +
-        " error message(characters).")
+      .doc("During engine bootstrapping, if anderror occurs, using this config to limit" +
+        " the length of error message(characters).")
       .version("1.1.0")
       .intConf
       .checkValue(v => v >= 200 && v <= 8192, s"must in [200, 8192]")
@@ -1067,7 +1068,7 @@ object KyuubiConf {
       .doc("Specify a profile to load session-level configurations from " +
         "`$KYUUBI_CONF_DIR/kyuubi-session-<profile>.conf`. " +
         "This configuration will be ignored if the file does not exist. " +
-        "This configuration only has effect when `kyuubi.session.conf.advisor` " +
+        "This configuration only takes effect when `kyuubi.session.conf.advisor` " +
         "is set as `org.apache.kyuubi.session.FileSessionConfAdvisor`.")
       .version("1.7.0")
       .stringConf
@@ -1084,7 +1085,7 @@ object KyuubiConf {
 
   val ENGINE_SPARK_MAX_LIFETIME: ConfigEntry[Long] =
     buildConf("kyuubi.session.engine.spark.max.lifetime")
-      .doc("Max lifetime for spark engine, the engine will self-terminate when it reaches the" +
+      .doc("Max lifetime for Spark engine, the engine will self-terminate when it reaches the" +
         " end of life. 0 or negative means not to self-terminate.")
       .version("1.6.0")
       .timeConf
@@ -1100,7 +1101,7 @@ object KyuubiConf {
 
   val ENGINE_FLINK_MAX_ROWS: ConfigEntry[Int] =
     buildConf("kyuubi.session.engine.flink.max.rows")
-      .doc("Max rows of Flink query results. For batch queries, rows that exceeds the limit " +
+      .doc("Max rows of Flink query results. For batch queries, rows exceeding the limit " +
         "would be ignored. For streaming queries, the query would be canceled if the limit " +
         "is reached.")
       .version("1.5.0")
@@ -1117,28 +1118,28 @@ object KyuubiConf {
 
   val ENGINE_TRINO_CONNECTION_URL: OptionalConfigEntry[String] =
     buildConf("kyuubi.session.engine.trino.connection.url")
-      .doc("The server url that trino engine will connect to")
+      .doc("The server url that Trino engine will connect to")
       .version("1.5.0")
       .stringConf
       .createOptional
 
   val ENGINE_TRINO_CONNECTION_CATALOG: OptionalConfigEntry[String] =
     buildConf("kyuubi.session.engine.trino.connection.catalog")
-      .doc("The default catalog that trino engine will connect to")
+      .doc("The default catalog that Trino engine will connect to")
       .version("1.5.0")
       .stringConf
       .createOptional
 
   val ENGINE_TRINO_SHOW_PROGRESS: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.trino.showProgress")
-      .doc("When true, show the progress bar and final info in the trino engine log.")
+      .doc("When true, show the progress bar and final info in the Trino engine log.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(true)
 
   val ENGINE_TRINO_SHOW_PROGRESS_DEBUG: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.trino.showProgress.debug")
-      .doc("When true, show the progress debug info in the trino engine log.")
+      .doc("When true, show the progress debug info in the Trino engine log.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(false)
@@ -1160,7 +1161,7 @@ object KyuubiConf {
   val ENGINE_ALIVE_PROBE_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.alive.probe.enabled")
       .doc("Whether to enable the engine alive probe, it true, we will create a companion thrift" +
-        " client that sends simple request to check whether the engine is keep alive.")
+        " client that keeps sending simple requests to check whether the engine is alive.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(false)
@@ -1189,7 +1190,7 @@ object KyuubiConf {
 
   val ENGINE_OPEN_RETRY_WAIT: ConfigEntry[Long] =
     buildConf("kyuubi.session.engine.open.retry.wait")
-      .doc("How long to wait before retrying to open engine after a failure.")
+      .doc("How long to wait before retrying to open the engine after failure.")
       .version("1.7.0")
       .timeConf
       .createWithDefault(Duration.ofSeconds(10).toMillis)
@@ -1241,7 +1242,7 @@ object KyuubiConf {
 
   val SESSION_CONF_IGNORE_LIST: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.session.conf.ignore.list")
-      .doc("A comma separated list of ignored keys. If the client connection contains any of" +
+      .doc("A comma-separated list of ignored keys. If the client connection contains any of" +
         " them, the key and the corresponding value will be removed silently during engine" +
         " bootstrap and connection setup." +
         " Note that this rule is for server-side protection defined via administrators to" +
@@ -1254,7 +1255,7 @@ object KyuubiConf {
 
   val SESSION_CONF_RESTRICT_LIST: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.session.conf.restrict.list")
-      .doc("A comma separated list of restricted keys. If the client connection contains any of" +
+      .doc("A comma-separated list of restricted keys. If the client connection contains any of" +
         " them, the connection will be rejected explicitly during engine bootstrap and connection" +
         " setup." +
         " Note that this rule is for server-side protection defined via administrators to" +
@@ -1268,15 +1269,16 @@ object KyuubiConf {
   val SESSION_USER_SIGN_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.user.sign.enabled")
       .doc("Whether to verify the integrity of session user name" +
-        " on engine side, e.g. Authz plugin in Spark.")
+        " on the engine side, e.g. Authz plugin in Spark.")
       .version("1.7.0")
       .booleanConf
       .createWithDefault(false)
 
   val SESSION_ENGINE_STARTUP_MAX_LOG_LINES: ConfigEntry[Int] =
     buildConf("kyuubi.session.engine.startup.maxLogLines")
-      .doc("The maximum number of engine log lines when errors occur during engine startup phase." +
-        " Note that this max lines is for client-side to help track engine startup issue.")
+      .doc("The maximum number of engine log lines when errors occur during the engine" +
+        " startup phase. Note that this config effects on client-side to" +
+        " help track engine startup issues.")
       .version("1.4.0")
       .intConf
       .checkValue(_ > 0, "the maximum must be positive integer.")
@@ -1284,17 +1286,17 @@ object KyuubiConf {
 
   val SESSION_ENGINE_STARTUP_WAIT_COMPLETION: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.startup.waitCompletion")
-      .doc("Whether to wait for completion after engine starts." +
+      .doc("Whether to wait for completion after the engine starts." +
         " If false, the startup process will be destroyed after the engine is started." +
         " Note that only use it when the driver is not running locally," +
-        " such as yarn-cluster mode; Otherwise, the engine will be killed.")
+        " such as in yarn-cluster mode; Otherwise, the engine will be killed.")
       .version("1.5.0")
       .booleanConf
       .createWithDefault(true)
 
   val SESSION_ENGINE_LAUNCH_ASYNC: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.launch.async")
-      .doc("When opening kyuubi session, whether to launch backend engine asynchronously." +
+      .doc("When opening kyuubi session, whether to launch the backend engine asynchronously." +
         " When true, the Kyuubi server will set up the connection with the client without delay" +
         " as the backend engine will be created asynchronously.")
       .version("1.4.0")
@@ -1303,11 +1305,12 @@ object KyuubiConf {
 
   val SESSION_LOCAL_DIR_ALLOW_LIST: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.session.local.dir.allow.list")
-      .doc("The local dir list that are allowed to access by the kyuubi session application. User" +
-        " might set some parameters such as `spark.files` and it will upload some local files" +
-        " when launching the kyuubi engine, if the local dir allow list is defined, kyuubi will" +
+      .doc("The local dir list that are allowed to access by the kyuubi session application. " +
+        " End-users might set some parameters such as `spark.files` and it will " +
+        " upload some local files when launching the kyuubi engine," +
+        " if the local dir allow list is defined, kyuubi will" +
         " check whether the path to upload is in the allow list. Note that, if it is empty, there" +
-        " is no limitation for that and please use absolute path list.")
+        " is no limitation for that. And please use absolute paths.")
       .version("1.6.0")
       .serverOnly
       .stringConf
@@ -1332,14 +1335,14 @@ object KyuubiConf {
 
   val BATCH_CONF_IGNORE_LIST: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.batch.conf.ignore.list")
-      .doc("A comma separated list of ignored keys for batch conf. If the batch conf contains" +
+      .doc("A comma-separated list of ignored keys for batch conf. If the batch conf contains" +
         " any of them, the key and the corresponding value will be removed silently during batch" +
         " job submission." +
         " Note that this rule is for server-side protection defined via administrators to" +
         " prevent some essential configs from tampering." +
-        " You can also pre-define some config for batch job submission with prefix:" +
+        " You can also pre-define some config for batch job submission with the prefix:" +
         " kyuubi.batchConf.[batchType]. For example, you can pre-define `spark.master`" +
-        " for spark batch job with key `kyuubi.batchConf.spark.spark.master`.")
+        " for the Spark batch job with key `kyuubi.batchConf.spark.spark.master`.")
       .version("1.6.0")
       .stringConf
       .toSequence()
@@ -1403,14 +1406,14 @@ object KyuubiConf {
   val METADATA_CLEANER_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.metadata.cleaner.enabled")
       .doc("Whether to clean the metadata periodically. If it is enabled, Kyuubi will clean the" +
-        " metadata that is in terminate state with max age limitation.")
+        " metadata that is in the terminate state with max age limitation.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(true)
 
   val METADATA_MAX_AGE: ConfigEntry[Long] =
     buildConf("kyuubi.metadata.max.age")
-      .doc("The maximum age of metadata, the metadata that exceeds the age will be cleaned.")
+      .doc("The maximum age of metadata, the metadata exceeding the age will be cleaned.")
       .version("1.6.0")
       .timeConf
       .createWithDefault(Duration.ofDays(3).toMillis)
@@ -1424,7 +1427,8 @@ object KyuubiConf {
 
   val METADATA_RECOVERY_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.metadata.recovery.threads")
-      .doc("The number of threads for recovery from metadata store when Kyuubi server restarting.")
+      .doc("The number of threads for recovery from the metadata store " +
+        "when the Kyuubi server restarts.")
       .version("1.6.0")
       .intConf
       .createWithDefault(10)
@@ -1432,8 +1436,9 @@ object KyuubiConf {
   val METADATA_REQUEST_RETRY_THREADS: ConfigEntry[Int] =
     buildConf("kyuubi.metadata.request.retry.threads")
       .doc("Number of threads in the metadata request retry manager thread pool. The metadata" +
-        " store might be unavailable sometimes and the requests will fail, to tolerant for this" +
-        " case and unblock the main thread, we support to retry the failed requests in async way.")
+        " store might be unavailable sometimes and the requests will fail, tolerant for this" +
+        " case and unblock the main thread, we support retrying the failed requests" +
+        " in an async way.")
       .version("1.6.0")
       .intConf
       .createWithDefault(10)
@@ -1514,12 +1519,13 @@ object KyuubiConf {
 
   val OPERATION_QUERY_TIMEOUT: OptionalConfigEntry[Long] =
     buildConf("kyuubi.operation.query.timeout")
-      .doc("Timeout for query executions at server-side, take affect with client-side timeout(" +
+      .doc("Timeout for query executions at server-side, take effect with client-side timeout(" +
         "`java.sql.Statement.setQueryTimeout`) together, a running query will be cancelled" +
-        " automatically if timeout. It's off by default, which means only client-side take fully" +
-        " control whether the query should timeout or not. If set, client-side timeout capped at" +
-        " this point. To cancel the queries right away without waiting task to finish, consider" +
-        s" enabling ${OPERATION_FORCE_CANCEL.key} together.")
+        " automatically if timeout. It's off by default, which means only client-side take full" +
+        " control of whether the query should timeout or not." +
+        " If set, client-side timeout is capped at this point." +
+        " To cancel the queries right away without waiting for task to finish," +
+        s" consider enabling ${OPERATION_FORCE_CANCEL.key} together.")
       .version("1.2.0")
       .timeConf
       .checkValue(_ >= 1000, "must >= 1s if set")
@@ -1549,7 +1555,7 @@ object KyuubiConf {
 
   val OPERATION_RESULT_MAX_ROWS: ConfigEntry[Int] =
     buildConf("kyuubi.operation.result.max.rows")
-      .doc("Max rows of Spark query results. Rows that exceeds the limit would be ignored. " +
+      .doc("Max rows of Spark query results. Rows exceeding the limit would be ignored. " +
         "By setting this value to 0 to disable the max rows limit.")
       .version("1.6.0")
       .intConf
@@ -1591,8 +1597,8 @@ object KyuubiConf {
   val ENGINE_SHARE_LEVEL_SUBDOMAIN: ConfigEntry[Option[String]] =
     buildConf("kyuubi.engine.share.level.subdomain")
       .doc("Allow end-users to create a subdomain for the share level of an engine. A" +
-        " subdomain is a case-insensitive string values that must be a valid zookeeper sub path." +
-        " For example, for `USER` share level, an end-user can share a certain engine within" +
+        " subdomain is a case-insensitive string values that must be a valid zookeeper subpath." +
+        " For example, for the `USER` share level, an end-user can share a certain engine within" +
         " a subdomain, not for all of its clients. End-users are free to create multiple" +
         " engines in the `USER` share level. When disable engine pool, use 'default' if absent.")
       .version("1.4.0")
@@ -1602,7 +1608,7 @@ object KyuubiConf {
   val ENGINE_CONNECTION_URL_USE_HOSTNAME: ConfigEntry[Boolean] =
     buildConf("kyuubi.engine.connection.url.use.hostname")
       .doc("(deprecated) " +
-        "When true, engine register with hostname to zookeeper. When spark run on k8s" +
+        "When true, the engine registers with hostname to zookeeper. When Spark runs on K8s" +
         " with cluster mode, set to false to ensure that server can connect to engine")
       .version("1.3.0")
       .booleanConf
@@ -1612,7 +1618,7 @@ object KyuubiConf {
     buildConf("kyuubi.frontend.connection.url.use.hostname")
       .doc("When true, frontend services prefer hostname, otherwise, ip address. Note that, " +
         "the default value is set to `false` when engine running on Kubernetes to prevent " +
-        "potential network issue.")
+        "potential network issues.")
       .version("1.5.0")
       .fallbackConf(ENGINE_CONNECTION_URL_USE_HOSTNAME)
 
@@ -1622,10 +1628,11 @@ object KyuubiConf {
       " connection</li>" +
       " <li>USER: engine will be shared by all sessions created by a unique username," +
       s" see also ${ENGINE_SHARE_LEVEL_SUBDOMAIN.key}</li>" +
-      " <li>GROUP: engine will be shared by all sessions created by all users belong to the same" +
-      " primary group name. The engine will be launched by the group name as the effective" +
-      " username, so here the group name is kind of special user who is able to visit the" +
-      " compute resources/data of a team. It follows the" +
+      " <li>GROUP: the engine will be shared by all sessions created" +
+      " by all users belong to the same primary group name." +
+      " The engine will be launched by the group name as the effective" +
+      " username, so here the group name is in value of special user who is able to visit the" +
+      " computing resources/data of the team. It follows the" +
       " [Hadoop GroupsMapping](https://reurl.cc/xE61Y5) to map user to a primary group. If the" +
       " primary group is not found, it fallback to the USER level." +
       " <li>SERVER: the App will be shared by Kyuubi servers</li></ul>")
@@ -1633,7 +1640,7 @@ object KyuubiConf {
     .fallbackConf(LEGACY_ENGINE_SHARE_LEVEL)
 
   val ENGINE_TYPE: ConfigEntry[String] = buildConf("kyuubi.engine.type")
-    .doc("Specify the detailed engine that supported by the Kyuubi. The engine type bindings to" +
+    .doc("Specify the detailed engine supported by Kyuubi. The engine type bindings to" +
       " SESSION scope. This configuration is experimental. Currently, available configs are: <ul>" +
       " <li>SPARK_SQL: specify this engine type will launch a Spark engine which can provide" +
       " all the capacity of the Apache Spark. Note, it's a default engine type.</li>" +
@@ -1644,7 +1651,7 @@ object KyuubiConf {
       " <li>HIVE_SQL: specify this engine type will launch a Hive engine which can provide" +
       " all the capacity of the Hive Server2.</li>" +
       " <li>JDBC: specify this engine type will launch a JDBC engine which can provide" +
-      " a mysql protocol connector, for now we only support Doris dialect.</li>" +
+      " a MySQL protocol connector, for now we only support Doris dialect.</li>" +
       "</ul>")
     .version("1.4.0")
     .stringConf
@@ -1662,22 +1669,22 @@ object KyuubiConf {
       .createWithDefault(false)
 
   val ENGINE_POOL_NAME: ConfigEntry[String] = buildConf("kyuubi.engine.pool.name")
-    .doc("The name of engine pool.")
+    .doc("The name of the engine pool.")
     .version("1.5.0")
     .stringConf
     .checkValue(validZookeeperSubPath.matcher(_).matches(), "must be valid zookeeper sub path.")
     .createWithDefault("engine-pool")
 
   val ENGINE_POOL_SIZE_THRESHOLD: ConfigEntry[Int] = buildConf("kyuubi.engine.pool.size.threshold")
-    .doc("This parameter is introduced as a server-side parameter, " +
-      "and controls the upper limit of the engine pool.")
+    .doc("This parameter is introduced as a server-side parameter " +
+      "controlling the upper limit of the engine pool.")
     .version("1.4.0")
     .intConf
     .checkValue(s => s > 0 && s < 33, "Invalid engine pool threshold, it should be in [1, 32]")
     .createWithDefault(9)
 
   val ENGINE_POOL_SIZE: ConfigEntry[Int] = buildConf("kyuubi.engine.pool.size")
-    .doc("The size of engine pool. Note that, " +
+    .doc("The size of the engine pool. Note that, " +
       "if the size is less than 1, the engine pool will not be enabled; " +
       "otherwise, the size of the engine pool will be " +
       s"min(this, ${ENGINE_POOL_SIZE_THRESHOLD.key}).")
@@ -1720,7 +1727,7 @@ object KyuubiConf {
 
   val ENGINE_DEREGISTER_EXCEPTION_CLASSES: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.deregister.exception.classes")
-      .doc("A comma separated list of exception classes. If there is any exception thrown," +
+      .doc("A comma-separated list of exception classes. If there is any exception thrown," +
         " whose class matches the specified classes, the engine would deregister itself.")
       .version("1.2.0")
       .stringConf
@@ -1729,7 +1736,7 @@ object KyuubiConf {
 
   val ENGINE_DEREGISTER_EXCEPTION_MESSAGES: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.deregister.exception.messages")
-      .doc("A comma separated list of exception messages. If there is any exception thrown," +
+      .doc("A comma-separated list of exception messages. If there is any exception thrown," +
         " whose message or stacktrace matches the specified message list, the engine would" +
         " deregister itself.")
       .version("1.2.0")
@@ -1760,8 +1767,8 @@ object KyuubiConf {
 
   val OPERATION_SCHEDULER_POOL: OptionalConfigEntry[String] =
     buildConf("kyuubi.operation.scheduler.pool")
-      .doc("The scheduler pool of job. Note that, this config should be used after change Spark " +
-        "config spark.scheduler.mode=FAIR.")
+      .doc("The scheduler pool of job. Note that, this config should be used after changing " +
+        "Spark config spark.scheduler.mode=FAIR.")
       .version("1.1.1")
       .stringConf
       .createOptional
@@ -1778,8 +1785,8 @@ object KyuubiConf {
   val ENGINE_USER_ISOLATED_SPARK_SESSION: ConfigEntry[Boolean] =
     buildConf("kyuubi.engine.user.isolated.spark.session")
       .doc("When set to false, if the engine is running in a group or server share level, " +
-        "all the JDBC/ODBC connections will be isolated against the user. Including: " +
-        "the temporary views, function registries, SQL configuration and the current database. " +
+        "all the JDBC/ODBC connections will be isolated against the user. Including " +
+        "the temporary views, function registries, SQL configuration, and the current database. " +
         "Note that, it does not affect if the share level is connection or user.")
       .version("1.6.0")
       .booleanConf
@@ -1788,21 +1795,21 @@ object KyuubiConf {
   val ENGINE_USER_ISOLATED_SPARK_SESSION_IDLE_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.engine.user.isolated.spark.session.idle.timeout")
       .doc(s"If ${ENGINE_USER_ISOLATED_SPARK_SESSION.key} is false, we will release the " +
-        s"spark session if its corresponding user is inactive after this configured timeout.")
+        s"Spark session if its corresponding user is inactive after this configured timeout.")
       .version("1.6.0")
       .timeConf
       .createWithDefault(Duration.ofHours(6).toMillis)
 
   val ENGINE_USER_ISOLATED_SPARK_SESSION_IDLE_INTERVAL: ConfigEntry[Long] =
     buildConf("kyuubi.engine.user.isolated.spark.session.idle.interval")
-      .doc(s"The interval to check if the user isolated spark session is timeout.")
+      .doc(s"The interval to check if the user-isolated Spark session is timeout.")
       .version("1.6.0")
       .timeConf
       .createWithDefault(Duration.ofMinutes(1).toMillis)
 
   val SERVER_EVENT_JSON_LOG_PATH: ConfigEntry[String] =
     buildConf("kyuubi.backend.server.event.json.log.path")
-      .doc("The location of server events go for the builtin JSON logger")
+      .doc("The location of server events go for the built-in JSON logger")
       .version("1.4.0")
       .serverOnly
       .stringConf
@@ -1810,7 +1817,7 @@ object KyuubiConf {
 
   val ENGINE_EVENT_JSON_LOG_PATH: ConfigEntry[String] =
     buildConf("kyuubi.engine.event.json.log.path")
-      .doc("The location of all the engine events go for the builtin JSON logger.<ul>" +
+      .doc("The location where all the engine events go for the built-in JSON logger.<ul>" +
         "<li>Local Path: start with 'file://'</li>" +
         "<li>HDFS Path: start with 'hdfs://'</li></ul>")
       .version("1.3.0")
@@ -1819,7 +1826,7 @@ object KyuubiConf {
 
   val SERVER_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.backend.server.event.loggers")
-      .doc("A comma separated list of server history loggers, where session/operation etc" +
+      .doc("A comma-separated list of server history loggers, where session/operation etc" +
         " events go.<ul>" +
         s" <li>JSON: the events will be written to the location of" +
         s" ${SERVER_EVENT_JSON_LOG_PATH.key}</li>" +
@@ -1827,9 +1834,9 @@ object KyuubiConf {
         s" <li>CUSTOM: User-defined event handlers.</li></ul>" +
         " Note that: Kyuubi supports custom event handlers with the Java SPI." +
         " To register a custom event handler," +
-        " user need to implement a class" +
+        " the user needs to implement a class" +
         " which is a child of org.apache.kyuubi.events.handler.CustomEventHandlerProvider" +
-        " which has zero-arg constructor.")
+        " which has a zero-arg constructor.")
       .version("1.4.0")
       .serverOnly
       .stringConf
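For the CUSTOM option, a hedged sketch of a provider follows; the zero-arg constructor requirement comes from the doc above, while the exact create(...) signature is an assumption to verify against the Kyuubi version in use. Registration goes through the standard Java SPI file META-INF/services/org.apache.kyuubi.events.handler.CustomEventHandlerProvider.

    import org.apache.kyuubi.config.KyuubiConf
    import org.apache.kyuubi.events.KyuubiEvent
    import org.apache.kyuubi.events.handler.{CustomEventHandlerProvider, EventHandler}

    // A toy handler that just prints every event it receives; the create(...)
    // signature is assumed and should be checked against your Kyuubi version.
    class StdoutEventHandlerProvider extends CustomEventHandlerProvider {
      override def create(kyuubiConf: KyuubiConf): EventHandler[KyuubiEvent] =
        new EventHandler[KyuubiEvent] {
          override def apply(event: KyuubiEvent): Unit =
            println(s"kyuubi event: $event")
        }
    }
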
@@ -1841,18 +1848,18 @@ object KyuubiConf {
   @deprecated("using kyuubi.engine.spark.event.loggers instead", "1.6.0")
   val ENGINE_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.event.loggers")
-      .doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
+      .doc("A comma-separated list of engine history loggers, where engine/session/operation etc" +
         " events go.<ul>" +
-        " <li>SPARK: the events will be written to the spark listener bus.</li>" +
+        " <li>SPARK: the events will be written to the Spark listener bus.</li>" +
         " <li>JSON: the events will be written to the location of" +
         s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
         " <li>JDBC: to be done</li>" +
         " <li>CUSTOM: User-defined event handlers.</li></ul>" +
         " Note that: Kyuubi supports custom event handlers with the Java SPI." +
         " To register a custom event handler," +
-        " user need to implement a class" +
-        " which is a child of org.apache.kyuubi.events.handler.CustomEventHandlerProvider" +
-        " which has zero-arg constructor.")
+        " the user needs to implement a subclass" +
+        " of `org.apache.kyuubi.events.handler.CustomEventHandlerProvider`" +
+        " which has a zero-arg constructor.")
       .version("1.3.0")
       .stringConf
       .transform(_.toUpperCase(Locale.ROOT))
@@ -1914,7 +1921,7 @@ object KyuubiConf {
     buildConf("kyuubi.engine.security.secret.provider")
       .internal
       .doc("The class used to manage the internal security secret. This class must be a " +
-        "subclass of EngineSecuritySecretProvider.")
+        "subclass of `EngineSecuritySecretProvider`.")
       .version("1.5.0")
       .stringConf
       .createWithDefault(
@@ -1956,8 +1963,8 @@ object KyuubiConf {
 
   val SESSION_NAME: OptionalConfigEntry[String] =
     buildConf("kyuubi.session.name")
-      .doc("A human readable name of session and we use empty string by default. " +
-        "This name will be recorded in event. Note that, we only apply this value from " +
+      .doc("A human readable name of the session and we use empty string by default. " +
+        "This name will be recorded in the event. Note that, we only apply this value from " +
         "session conf.")
       .version("1.4.0")
       .stringConf
@@ -1991,8 +1998,9 @@ object KyuubiConf {
 
   val OPERATION_PLAN_ONLY_OUT_STYLE: ConfigEntry[String] =
     buildConf("kyuubi.operation.plan.only.output.style")
-      .doc("Configures the planOnly output style, The value can be 'plain' and 'json', default " +
-        "value is 'plain', this configuration supports only the output styles of the Spark engine")
+      .doc("Configures the planOnly output style. The value can be 'plain' or 'json', and " +
+        "the default value is 'plain'. This configuration supports only the output styles " +
+        "of the Spark engine")
       .version("1.7.0")
       .stringConf
       .transform(_.toUpperCase(Locale.ROOT))
@@ -2005,8 +2013,8 @@ object KyuubiConf {
   val OPERATION_PLAN_ONLY_EXCLUDES: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.operation.plan.only.excludes")
       .doc("Comma-separated list of query plan names, in the form of simple class names, i.e, " +
-        "for `set abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as " +
-        "`switch databases`, `set properties`, or `create temporary view` e.t.c, " +
+        "for `SET abc=xyz`, the value will be `SetCommand`. For those auxiliary plans, such as " +
+        "`switch databases`, `set properties`, or `create temporary view` etc., " +
         "which are used for setup evaluating environments for analyzing actual queries, " +
         "we can use this config to exclude them and let them take effect. " +
         s"See also ${OPERATION_PLAN_ONLY_MODE.key}.")
@@ -2049,9 +2057,9 @@ object KyuubiConf {
   val SESSION_CONF_ADVISOR: OptionalConfigEntry[String] =
     buildConf("kyuubi.session.conf.advisor")
       .doc("A config advisor plugin for Kyuubi Server. This plugin can provide some custom " +
-        "configs for different user or session configs and overwrite the session configs before " +
-        "open a new session. This config value should be a class which is a child of " +
-        "'org.apache.kyuubi.plugin.SessionConfAdvisor' which has zero-arg constructor.")
+        "configs for different users or session configs and overwrite the session configs before " +
+        "opening a new session. This config value should be a subclass of " +
+        "`org.apache.kyuubi.plugin.SessionConfAdvisor` which has a zero-arg constructor.")
       .version("1.5.0")
       .stringConf
       .createOptional
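A minimal sketch of such an advisor, assuming the getConfOverlay(user, sessionConf) method of org.apache.kyuubi.plugin.SessionConfAdvisor (verify the signature against the kyuubi-server-plugin module of your version); the etl user and pool name are invented for illustration:

    import java.util.{Collections, Map => JMap}
    import org.apache.kyuubi.plugin.SessionConfAdvisor

    // Overlays a scheduler pool for one user, leaves everyone else untouched.
    class TeamConfAdvisor extends SessionConfAdvisor {
      override def getConfOverlay(
          user: String,
          sessionConf: JMap[String, String]): JMap[String, String] =
        if (user == "etl") {
          Collections.singletonMap("kyuubi.operation.scheduler.pool", "etl_pool")
        } else {
          Collections.emptyMap[String, String]()
        }
    }
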
@@ -2059,9 +2067,9 @@ object KyuubiConf {
   val GROUP_PROVIDER: ConfigEntry[String] =
     buildConf("kyuubi.session.group.provider")
       .doc("A group provider plugin for Kyuubi Server. This plugin can provide primary group " +
-        "and groups information for different user or session configs. This config value " +
-        "should be a class which is a child of 'org.apache.kyuubi.plugin.GroupProvider' which " +
-        "has zero-arg constructor. Kyuubi provides the following built-in implementations: " +
+        "and groups information for different users or session configs. This config value " +
+        "should be a subclass of `org.apache.kyuubi.plugin.GroupProvider` which " +
+        "has a zero-arg constructor. Kyuubi provides the following built-in implementations: " +
         "<li>hadoop: delegate the user group mapping to hadoop UserGroupInformation.</li>")
       .version("1.7.0")
       .stringConf
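Similarly, a hedged sketch of a custom group provider, assuming the primaryGroup/groups methods of org.apache.kyuubi.plugin.GroupProvider; the bi_ prefix convention is made up for illustration:

    import java.util.{Map => JMap}
    import org.apache.kyuubi.plugin.GroupProvider

    // Maps users with a hypothetical "bi_" prefix into a shared "bi" group
    // and falls back to a per-user group otherwise.
    class PrefixGroupProvider extends GroupProvider {
      override def primaryGroup(user: String, sessionConf: JMap[String, String]): String =
        groups(user, sessionConf).head

      override def groups(user: String, sessionConf: JMap[String, String]): Array[String] =
        if (user.startsWith("bi_")) Array("bi", user) else Array(user)
    }
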
@@ -2091,7 +2099,7 @@ object KyuubiConf {
 
   val ENGINE_SPARK_SHOW_PROGRESS: ConfigEntry[Boolean] =
     buildConf("kyuubi.session.engine.spark.showProgress")
-      .doc("When true, show the progress bar in the spark engine log.")
+      .doc("When true, show the progress bar in the Spark's engine log.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(false)
@@ -2113,65 +2121,65 @@ object KyuubiConf {
 
   val ENGINE_TRINO_MEMORY: ConfigEntry[String] =
     buildConf("kyuubi.engine.trino.memory")
-      .doc("The heap memory for the trino query engine")
+      .doc("The heap memory for the Trino query engine")
       .version("1.6.0")
       .stringConf
       .createWithDefault("1g")
 
   val ENGINE_TRINO_JAVA_OPTIONS: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.trino.java.options")
-      .doc("The extra java options for the trino query engine")
+      .doc("The extra Java options for the Trino query engine")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_TRINO_EXTRA_CLASSPATH: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.trino.extra.classpath")
-      .doc("The extra classpath for the trino query engine, " +
-        "for configuring other libs which may need by the trino engine ")
+      .doc("The extra classpath for the Trino query engine, " +
+        "for configuring other libs which may need by the Trino engine ")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_HIVE_MEMORY: ConfigEntry[String] =
     buildConf("kyuubi.engine.hive.memory")
-      .doc("The heap memory for the hive query engine")
+      .doc("The heap memory for the Hive query engine")
       .version("1.6.0")
       .stringConf
       .createWithDefault("1g")
 
   val ENGINE_HIVE_JAVA_OPTIONS: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.hive.java.options")
-      .doc("The extra java options for the hive query engine")
+      .doc("The extra Java options for the Hive query engine")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_HIVE_EXTRA_CLASSPATH: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.hive.extra.classpath")
-      .doc("The extra classpath for the hive query engine, for configuring location" +
-        " of hadoop client jars, etc")
+      .doc("The extra classpath for the Hive query engine, for configuring location" +
+        " of the hadoop client jars and etc.")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_FLINK_MEMORY: ConfigEntry[String] =
     buildConf("kyuubi.engine.flink.memory")
-      .doc("The heap memory for the flink sql engine")
+      .doc("The heap memory for the Flink SQL engine")
       .version("1.6.0")
       .stringConf
       .createWithDefault("1g")
 
   val ENGINE_FLINK_JAVA_OPTIONS: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.flink.java.options")
-      .doc("The extra java options for the flink sql engine")
+      .doc("The extra Java options for the Flink SQL engine")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_FLINK_EXTRA_CLASSPATH: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.flink.extra.classpath")
-      .doc("The extra classpath for the flink sql engine, for configuring location" +
+      .doc("The extra classpath for the Flink SQL engine, for configuring the location" +
         " of hadoop client jars, etc")
       .version("1.6.0")
       .stringConf
@@ -2258,15 +2266,15 @@ object KyuubiConf {
 
   val OPERATION_SPARK_LISTENER_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.operation.spark.listener.enabled")
-      .doc("When set to true, Spark engine registers a SQLOperationListener before executing " +
-        "the statement, logs a few summary statistics when each stage completes.")
+      .doc("When set to true, Spark engine registers an SQLOperationListener before executing " +
+        "the statement, logging a few summary statistics when each stage completes.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(true)
 
   val ENGINE_JDBC_DRIVER_CLASS: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.driver.class")
-      .doc("The driver class for jdbc engine connection")
+      .doc("The driver class for JDBC engine connection")
       .version("1.6.0")
       .stringConf
       .createOptional
@@ -2302,14 +2310,14 @@ object KyuubiConf {
 
   val ENGINE_JDBC_CONNECTION_PROVIDER: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.connection.provider")
-      .doc("The connection provider is used for getting a connection from server")
+      .doc("The connection provider is used for getting a connection from the server")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_JDBC_SHORT_NAME: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.type")
-      .doc("The short name of jdbc type")
+      .doc("The short name of JDBC type")
       .version("1.6.0")
       .stringConf
       .createOptional
@@ -2395,31 +2403,31 @@ object KyuubiConf {
 
   val ENGINE_JDBC_MEMORY: ConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.memory")
-      .doc("The heap memory for the jdbc query engine")
+      .doc("The heap memory for the JDBC query engine")
       .version("1.6.0")
       .stringConf
       .createWithDefault("1g")
 
   val ENGINE_JDBC_JAVA_OPTIONS: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.java.options")
-      .doc("The extra java options for the jdbc query engine")
+      .doc("The extra Java options for the JDBC query engine")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_JDBC_EXTRA_CLASSPATH: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.jdbc.extra.classpath")
-      .doc("The extra classpath for the jdbc query engine, for configuring location" +
-        " of jdbc driver, etc")
+      .doc("The extra classpath for the JDBC query engine, for configuring the location" +
+        " of the JDBC driver and etc.")
       .version("1.6.0")
       .stringConf
       .createOptional
 
   val ENGINE_SPARK_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.spark.event.loggers")
-      .doc("A comma separated list of engine loggers, where engine/session/operation etc" +
+      .doc("A comma-separated list of engine loggers, where engine/session/operation etc" +
         " events go.<ul>" +
-        " <li>SPARK: the events will be written to the spark listener bus.</li>" +
+        " <li>SPARK: the events will be written to the Spark listener bus.</li>" +
         " <li>JSON: the events will be written to the location of" +
         s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
         " <li>JDBC: to be done</li>" +
@@ -2430,28 +2438,28 @@ object KyuubiConf {
   val ENGINE_SPARK_PYTHON_HOME_ARCHIVE: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.spark.python.home.archive")
       .doc("Spark archive containing $SPARK_HOME/python directory, which is used to init session" +
-        " python worker for python language mode.")
+        " Python worker for Python language mode.")
       .version("1.7.0")
       .stringConf
       .createOptional
 
   val ENGINE_SPARK_PYTHON_ENV_ARCHIVE: OptionalConfigEntry[String] =
     buildConf("kyuubi.engine.spark.python.env.archive")
-      .doc("Portable python env archive used for Spark engine python language mode.")
+      .doc("Portable Python env archive used for Spark engine Python language mode.")
       .version("1.7.0")
       .stringConf
       .createOptional
 
   val ENGINE_SPARK_PYTHON_ENV_ARCHIVE_EXEC_PATH: ConfigEntry[String] =
     buildConf("kyuubi.engine.spark.python.env.archive.exec.path")
-      .doc("The python exec path under the python env archive.")
+      .doc("The Python exec path under the Python env archive.")
       .version("1.7.0")
       .stringConf
       .createWithDefault("bin/python")
 
   val ENGINE_HIVE_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.hive.event.loggers")
-      .doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
+      .doc("A comma-separated list of engine history loggers, where engine/session/operation etc" +
         " events go.<ul>" +
         " <li>JSON: the events will be written to the location of" +
         s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
@@ -2468,7 +2476,7 @@ object KyuubiConf {
 
   val ENGINE_TRINO_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
     buildConf("kyuubi.engine.trino.event.loggers")
-      .doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
+      .doc("A comma-separated list of engine history loggers, where engine/session/operation etc" +
         " events go.<ul>" +
         " <li>JSON: the events will be written to the location of" +
         s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
diff --git a/kyuubi-ctl/src/main/scala/org/apache/kyuubi/ctl/CtlConf.scala b/kyuubi-ctl/src/main/scala/org/apache/kyuubi/ctl/CtlConf.scala
index 08fbd7342..f299a5a88 100644
--- a/kyuubi-ctl/src/main/scala/org/apache/kyuubi/ctl/CtlConf.scala
+++ b/kyuubi-ctl/src/main/scala/org/apache/kyuubi/ctl/CtlConf.scala
@@ -28,7 +28,7 @@ object CtlConf {
   val CTL_REST_CLIENT_BASE_URL: OptionalConfigEntry[String] =
     buildConf("kyuubi.ctl.rest.base.url")
       .doc("The REST API base URL, " +
-        "which contains the scheme (http:// or https://), host name, port number")
+        "which contains the scheme (http:// or https://), hostname, port number")
       .version("1.6.0")
       .stringConf
       .createOptional
@@ -49,7 +49,7 @@ object CtlConf {
 
   val CTL_REST_CLIENT_CONNECT_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.ctl.rest.connect.timeout")
-      .doc("The timeout[ms] for establishing the connection with the kyuubi server." +
+      .doc("The timeout[ms] for establishing the connection with the kyuubi server. " +
         "A timeout value of zero is interpreted as an infinite timeout.")
       .version("1.6.0")
       .timeConf
@@ -58,7 +58,7 @@ object CtlConf {
 
   val CTL_REST_CLIENT_SOCKET_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.ctl.rest.socket.timeout")
-      .doc("The timeout[ms] for waiting for data packets after connection is established." +
+      .doc("The timeout[ms] for waiting for data packets after connection is established. " +
         "A timeout value of zero is interpreted as an infinite timeout.")
       .version("1.6.0")
       .timeConf
diff --git a/kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/HighAvailabilityConf.scala b/kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/HighAvailabilityConf.scala
index d33dccf98..6052e31f5 100644
--- a/kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/HighAvailabilityConf.scala
+++ b/kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/HighAvailabilityConf.scala
@@ -31,7 +31,7 @@ object HighAvailabilityConf {
 
   @deprecated("using kyuubi.ha.addresses instead", "1.6.0")
   val HA_ZK_QUORUM: ConfigEntry[String] = buildConf("kyuubi.ha.zookeeper.quorum")
-    .doc("(deprecated) The connection string for the zookeeper ensemble")
+    .doc("(deprecated) The connection string for the ZooKeeper ensemble")
     .version("1.0.0")
     .stringConf
     .createWithDefault("")
@@ -69,14 +69,14 @@ object HighAvailabilityConf {
     "1.3.2")
   val HA_ZK_ACL_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.ha.zookeeper.acl.enabled")
-      .doc("Set to true if the zookeeper ensemble is kerberized")
+      .doc("Set to true if the ZooKeeper ensemble is kerberized")
       .version("1.0.0")
       .booleanConf
       .createWithDefault(UserGroupInformation.isSecurityEnabled)
 
   val HA_ZK_AUTH_TYPE: ConfigEntry[String] =
     buildConf("kyuubi.ha.zookeeper.auth.type")
-      .doc("The type of zookeeper authentication, all candidates are " +
+      .doc("The type of ZooKeeper authentication, all candidates are " +
         s"${AuthTypes.values.mkString("<ul><li>", "</li><li> ", "</li></ul>")}")
       .version("1.3.2")
       .stringConf
@@ -85,7 +85,7 @@ object HighAvailabilityConf {
 
   val HA_ZK_ENGINE_AUTH_TYPE: ConfigEntry[String] =
     buildConf("kyuubi.ha.zookeeper.engine.auth.type")
-      .doc("The type of zookeeper authentication for engine, all candidates are " +
+      .doc("The type of ZooKeeper authentication for the engine, all candidates are " +
         s"${AuthTypes.values.mkString("<ul><li>", "</li><li> ", "</li></ul>")}")
       .version("1.3.2")
       .stringConf
@@ -94,31 +94,31 @@ object HighAvailabilityConf {
 
   val HA_ZK_AUTH_PRINCIPAL: ConfigEntry[Option[String]] =
     buildConf("kyuubi.ha.zookeeper.auth.principal")
-      .doc("Name of the Kerberos principal is used for zookeeper authentication.")
+      .doc("Name of the Kerberos principal is used for ZooKeeper authentication.")
       .version("1.3.2")
       .fallbackConf(KyuubiConf.SERVER_PRINCIPAL)
 
   val HA_ZK_AUTH_KEYTAB: ConfigEntry[Option[String]] = buildConf("kyuubi.ha.zookeeper.auth.keytab")
-    .doc("Location of Kyuubi server's keytab is used for zookeeper authentication.")
+    .doc("Location of the Kyuubi server's keytab is used for ZooKeeper authentication.")
     .version("1.3.2")
     .fallbackConf(KyuubiConf.SERVER_KEYTAB)
 
   val HA_ZK_AUTH_DIGEST: OptionalConfigEntry[String] = buildConf("kyuubi.ha.zookeeper.auth.digest")
-    .doc("The digest auth string is used for zookeeper authentication, like: username:password.")
+    .doc("The digest auth string is used for ZooKeeper authentication, like: username:password.")
     .version("1.3.2")
     .stringConf
     .createOptional
 
   val HA_ZK_CONN_MAX_RETRIES: ConfigEntry[Int] =
     buildConf("kyuubi.ha.zookeeper.connection.max.retries")
-      .doc("Max retry times for connecting to the zookeeper ensemble")
+      .doc("Max retry times for connecting to the ZooKeeper ensemble")
       .version("1.0.0")
       .intConf
       .createWithDefault(3)
 
   val HA_ZK_CONN_BASE_RETRY_WAIT: ConfigEntry[Int] =
     buildConf("kyuubi.ha.zookeeper.connection.base.retry.wait")
-      .doc("Initial amount of time to wait between retries to the zookeeper ensemble")
+      .doc("Initial amount of time to wait between retries to the ZooKeeper ensemble")
       .version("1.0.0")
       .intConf
       .createWithDefault(1000)
@@ -133,7 +133,7 @@ object HighAvailabilityConf {
       .createWithDefault(30 * 1000)
 
   val HA_ZK_CONN_TIMEOUT: ConfigEntry[Int] = buildConf("kyuubi.ha.zookeeper.connection.timeout")
-    .doc("The timeout(ms) of creating the connection to the zookeeper ensemble")
+    .doc("The timeout(ms) of creating the connection to the ZooKeeper ensemble")
     .version("1.0.0")
     .intConf
     .createWithDefault(15 * 1000)
@@ -146,7 +146,7 @@ object HighAvailabilityConf {
 
   val HA_ZK_CONN_RETRY_POLICY: ConfigEntry[String] =
     buildConf("kyuubi.ha.zookeeper.connection.retry.policy")
-      .doc("The retry policy for connecting to the zookeeper ensemble, all candidates are:" +
+      .doc("The retry policy for connecting to the ZooKeeper ensemble, all candidates are:" +
         s" ${RetryPolicies.values.mkString("<ul><li>", "</li><li> ", "</li></ul>")}")
       .version("1.0.0")
       .stringConf
@@ -155,7 +155,7 @@ object HighAvailabilityConf {
 
   val HA_ZK_NODE_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.ha.zookeeper.node.creation.timeout")
-      .doc("Timeout for creating zookeeper node")
+      .doc("Timeout for creating ZooKeeper node")
       .version("1.2.0")
       .timeConf
       .checkValue(_ > 0, "Must be positive")
@@ -163,7 +163,7 @@ object HighAvailabilityConf {
 
   val HA_ENGINE_REF_ID: OptionalConfigEntry[String] =
     buildConf("kyuubi.ha.engine.ref.id")
-      .doc("The engine reference id will be attached to zookeeper node when engine started, " +
+      .doc("The engine reference id will be attached to ZooKeeper node when engine started, " +
         "and the kyuubi server will check it cyclically.")
       .internal
       .version("1.3.2")
@@ -172,7 +172,7 @@ object HighAvailabilityConf {
 
   val HA_ZK_PUBLISH_CONFIGS: ConfigEntry[Boolean] =
     buildConf("kyuubi.ha.zookeeper.publish.configs")
-      .doc("When set to true, publish Kerberos configs to Zookeeper." +
+      .doc("When set to true, publish Kerberos configs to Zookeeper. " +
         "Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch.")
       .version("1.4.0")
       .booleanConf
@@ -189,8 +189,8 @@ object HighAvailabilityConf {
 
   val HA_ETCD_LEASE_TIMEOUT: ConfigEntry[Long] =
     buildConf("kyuubi.ha.etcd.lease.timeout")
-      .doc("Timeout for etcd keep alive lease. The kyuubi server will known " +
-        "unexpected loss of engine after up to this seconds.")
+      .doc("Timeout for etcd keep alive lease. The kyuubi server will know " +
+        "the unexpected loss of engine after up to this seconds.")
       .version("1.6.0")
       .timeConf
       .checkValue(_ > 0, "Must be positive")
@@ -198,7 +198,7 @@ object HighAvailabilityConf {
 
   val HA_ETCD_SSL_ENABLED: ConfigEntry[Boolean] =
     buildConf("kyuubi.ha.etcd.ssl.enabled")
-      .doc("When set to true, will build a ssl secured etcd client.")
+      .doc("When set to true, will build an SSL secured etcd client.")
       .version("1.6.0")
       .booleanConf
       .createWithDefault(false)
diff --git a/kyuubi-metrics/src/main/scala/org/apache/kyuubi/metrics/MetricsConf.scala b/kyuubi-metrics/src/main/scala/org/apache/kyuubi/metrics/MetricsConf.scala
index cacc15b18..daa221b78 100644
--- a/kyuubi-metrics/src/main/scala/org/apache/kyuubi/metrics/MetricsConf.scala
+++ b/kyuubi-metrics/src/main/scala/org/apache/kyuubi/metrics/MetricsConf.scala
@@ -34,12 +34,12 @@ object MetricsConf {
       .createWithDefault(true)
 
   val METRICS_REPORTERS: ConfigEntry[Seq[String]] = buildConf("kyuubi.metrics.reporters")
-    .doc("A comma separated list for all metrics reporters" +
+    .doc("A comma-separated list for all metrics reporters" +
       "<ul>" +
       " <li>CONSOLE - ConsoleReporter which outputs measurements to CONSOLE periodically.</li>" +
       " <li>JMX - JmxReporter which listens for new metrics and exposes them as MBeans.</li> " +
       " <li>JSON - JsonReporter which outputs measurements to json file periodically.</li>" +
-      " <li>PROMETHEUS - PrometheusReporter which exposes metrics in prometheus format.</li>" +
+      " <li>PROMETHEUS - PrometheusReporter which exposes metrics in Prometheus format.</li>" +
       " <li>SLF4J - Slf4jReporter which outputs measurements to system log periodically.</li>" +
       "</ul>")
     .version("1.2.0")
@@ -58,13 +58,13 @@ object MetricsConf {
     .createWithDefault(Duration.ofSeconds(5).toMillis)
 
   val METRICS_JSON_LOCATION: ConfigEntry[String] = buildConf("kyuubi.metrics.json.location")
-    .doc("Where the json metrics file located")
+    .doc("Where the JSON metrics file located")
     .version("1.2.0")
     .stringConf
     .createWithDefault("metrics")
 
   val METRICS_JSON_INTERVAL: ConfigEntry[Long] = buildConf("kyuubi.metrics.json.interval")
-    .doc("How often should report metrics to json file")
+    .doc("How often should report metrics to JSON file")
     .version("1.2.0")
     .timeConf
     .createWithDefault(Duration.ofSeconds(5).toMillis)
diff --git a/kyuubi-server/src/main/scala/org/apache/kyuubi/server/metadata/jdbc/JDBCMetadataStoreConf.scala b/kyuubi-server/src/main/scala/org/apache/kyuubi/server/metadata/jdbc/JDBCMetadataStoreConf.scala
index 27b9bc58e..84067b8d0 100644
--- a/kyuubi-server/src/main/scala/org/apache/kyuubi/server/metadata/jdbc/JDBCMetadataStoreConf.scala
+++ b/kyuubi-server/src/main/scala/org/apache/kyuubi/server/metadata/jdbc/JDBCMetadataStoreConf.scala
@@ -38,11 +38,11 @@ object JDBCMetadataStoreConf {
   val METADATA_STORE_JDBC_DATABASE_TYPE: ConfigEntry[String] =
     buildConf("kyuubi.metadata.store.jdbc.database.type")
       .doc("The database type for server jdbc metadata store.<ul>" +
-        " <li>DERBY: Apache Derby, jdbc driver `org.apache.derby.jdbc.AutoloadedDriver`.</li>" +
-        " <li>MYSQL: MySQL, jdbc driver `com.mysql.jdbc.Driver`.</li>" +
-        " <li>CUSTOM: User-defined database type, need to specify corresponding jdbc driver.</li>" +
-        " Note that: The jdbc datasource is powered by HiKariCP, for datasource properties," +
-        " please specify them with prefix: kyuubi.metadata.store.jdbc.datasource." +
+        " <li>DERBY: Apache Derby, JDBC driver `org.apache.derby.jdbc.AutoloadedDriver`.</li>" +
+        " <li>MYSQL: MySQL, JDBC driver `com.mysql.jdbc.Driver`.</li>" +
+        " <li>CUSTOM: User-defined database type, need to specify corresponding JDBC driver.</li>" +
+        " Note that: The JDBC datasource is powered by HiKariCP, for datasource properties," +
+        " please specify them with the prefix: kyuubi.metadata.store.jdbc.datasource." +
         " For example, kyuubi.metadata.store.jdbc.datasource.connectionTimeout=10000.")
       .version("1.6.0")
       .serverOnly
@@ -52,7 +52,7 @@ object JDBCMetadataStoreConf {
 
   val METADATA_STORE_JDBC_DATABASE_SCHEMA_INIT: ConfigEntry[Boolean] =
     buildConf("kyuubi.metadata.store.jdbc.database.schema.init")
-      .doc("Whether to init the jdbc metadata store database schema.")
+      .doc("Whether to init the JDBC metadata store database schema.")
       .version("1.6.0")
       .serverOnly
       .booleanConf
@@ -68,9 +68,10 @@ object JDBCMetadataStoreConf {
 
   val METADATA_STORE_JDBC_URL: ConfigEntry[String] =
     buildConf("kyuubi.metadata.store.jdbc.url")
-      .doc("The jdbc url for server jdbc metadata store. By defaults, it is a DERBY in-memory" +
+      .doc("The JDBC url for server JDBC metadata store. By default, it is a DERBY in-memory" +
         " database url, and the state information is not shared across kyuubi instances. To" +
-        " enable multiple kyuubi instances high available, please specify a production jdbc url.")
+        " enable high availability for multiple kyuubi instances," +
+        " please specify a production JDBC url.")
       .version("1.6.0")
       .serverOnly
       .stringConf
@@ -78,7 +79,7 @@ object JDBCMetadataStoreConf {
 
   val METADATA_STORE_JDBC_USER: ConfigEntry[String] =
     buildConf("kyuubi.metadata.store.jdbc.user")
-      .doc("The username for server jdbc metadata store.")
+      .doc("The username for server JDBC metadata store.")
       .version("1.6.0")
       .serverOnly
       .stringConf
@@ -86,7 +87,7 @@ object JDBCMetadataStoreConf {
 
   val METADATA_STORE_JDBC_PASSWORD: ConfigEntry[String] =
     buildConf("kyuubi.metadata.store.jdbc.password")
-      .doc("The password for server jdbc metadata store.")
+      .doc("The password for server JDBC metadata store.")
       .version("1.6.0")
       .serverOnly
       .stringConf
diff --git a/kyuubi-server/src/test/scala/org/apache/kyuubi/config/AllKyuubiConfiguration.scala b/kyuubi-server/src/test/scala/org/apache/kyuubi/config/AllKyuubiConfiguration.scala
index 31ab67754..6db07a2e1 100644
--- a/kyuubi-server/src/test/scala/org/apache/kyuubi/config/AllKyuubiConfiguration.scala
+++ b/kyuubi-server/src/test/scala/org/apache/kyuubi/config/AllKyuubiConfiguration.scala
@@ -94,7 +94,7 @@ class AllKyuubiConfiguration extends KyuubiFunSuite {
     newOutput += " - limitations under the License."
     newOutput += " -->"
     newOutput += ""
-    newOutput += "<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO GENERATED BY" +
+    newOutput += "<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS AUTO-GENERATED BY" +
       " [org.apache.kyuubi.config.AllKyuubiConfiguration] -->"
     newOutput += ""
     newOutput += ""
@@ -120,7 +120,7 @@ class AllKyuubiConfiguration extends KyuubiFunSuite {
       " `kyuubi.engineEnv.VAR_NAME`. For example, with `kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g`," +
       " the environment variable `SPARK_DRIVER_MEMORY` with value `4g` would be transferred into" +
       " engine side. With `kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf`, the" +
-      " value of `SPARK_CONF_DIR` in engine side is set to `/apache/confs/spark/conf`."
+      " value of `SPARK_CONF_DIR` on the engine side is set to `/apache/confs/spark/conf`."
 
     newOutput += ""
     newOutput += "## Kyuubi Configurations"
@@ -194,7 +194,7 @@ class AllKyuubiConfiguration extends KyuubiFunSuite {
     newOutput += ""
     newOutput += ("  - For [Static SQL Configurations](" +
       "http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and" +
-      " other spark core configs, e.g. `spark.executor.memory`, they will take affect if there" +
+      " other spark core configs, e.g. `spark.executor.memory`, they will take effect if there" +
       " is no existing SQL engine application. Otherwise, they will just be ignored")
     newOutput += ""
     newOutput += ("### Via SET Syntax")
@@ -257,7 +257,7 @@ class AllKyuubiConfiguration extends KyuubiFunSuite {
     newOutput += ""
     newOutput += ("### Hadoop Configurations")
     newOutput += ""
-    newOutput += ("Specifying `HADOOP_CONF_DIR` to the directory contains hadoop configuration" +
+    newOutput += ("Specifying `HADOOP_CONF_DIR` to the directory containing Hadoop configuration" +
       " files or treating them as Spark properties with a `spark.hadoop.` prefix." +
       " Please refer to the Spark official online documentation for" +
       " [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/" +
@@ -269,7 +269,7 @@ class AllKyuubiConfiguration extends KyuubiFunSuite {
     newOutput += ""
     newOutput += ("These configurations are used for SQL engine application to talk to" +
       " Hive MetaStore and could be configured in a `hive-site.xml`." +
-      " Placed it in `$SPARK_HOME/conf` directory, or treating them as Spark properties with" +
+      " Placed it in `$SPARK_HOME/conf` directory, or treat them as Spark properties with" +
       " a `spark.hadoop.` prefix.")
 
     newOutput += ""
diff --git a/kyuubi-zookeeper/src/main/scala/org/apache/kyuubi/zookeeper/ZookeeperConf.scala b/kyuubi-zookeeper/src/main/scala/org/apache/kyuubi/zookeeper/ZookeeperConf.scala
index 98536a667..ee1fe00dc 100644
--- a/kyuubi-zookeeper/src/main/scala/org/apache/kyuubi/zookeeper/ZookeeperConf.scala
+++ b/kyuubi-zookeeper/src/main/scala/org/apache/kyuubi/zookeeper/ZookeeperConf.scala
@@ -25,27 +25,27 @@ object ZookeeperConf {
 
   @deprecated("using kyuubi.zookeeper.embedded.client.port instead", since = "1.2.0")
   val EMBEDDED_ZK_PORT: ConfigEntry[Int] = buildConf("kyuubi.zookeeper.embedded.port")
-    .doc("The port of the embedded zookeeper server")
+    .doc("The port of the embedded ZooKeeper server")
     .version("1.0.0")
     .intConf
     .createWithDefault(2181)
 
   @deprecated("using kyuubi.zookeeper.embedded.data.dir instead", since = "1.2.0")
   val EMBEDDED_ZK_TEMP_DIR: ConfigEntry[String] = buildConf("kyuubi.zookeeper.embedded.directory")
-    .doc("The temporary directory for the embedded zookeeper server")
+    .doc("The temporary directory for the embedded ZooKeeper server")
     .version("1.0.0")
     .stringConf
     .createWithDefault("embedded_zookeeper")
 
   val ZK_CLIENT_PORT: ConfigEntry[Int] = buildConf("kyuubi.zookeeper.embedded.client.port")
-    .doc("clientPort for the embedded zookeeper server to listen for client connections," +
-      " a client here could be Kyuubi server, engine and JDBC client")
+    .doc("clientPort for the embedded ZooKeeper server to listen for client connections," +
+      " a client here could be Kyuubi server, engine, and JDBC client")
     .version("1.2.0")
     .fallbackConf(EMBEDDED_ZK_PORT)
 
   val ZK_CLIENT_PORT_ADDRESS: OptionalConfigEntry[String] =
     buildConf("kyuubi.zookeeper.embedded.client.port.address")
-      .doc("clientPortAddress for the embedded zookeeper server to")
+      .doc("clientPortAddress for the embedded ZooKeeper server to")
       .version("1.2.0")
       .stringConf
       .createOptional
@@ -57,19 +57,19 @@ object ZookeeperConf {
     .fallbackConf(EMBEDDED_ZK_TEMP_DIR)
 
   val ZK_DATA_LOG_DIR: ConfigEntry[String] = buildConf("kyuubi.zookeeper.embedded.data.log.dir")
-    .doc("dataLogDir for the embedded zookeeper server where writes the transaction log .")
+    .doc("dataLogDir for the embedded ZooKeeper server where writes the transaction log .")
     .version("1.2.0")
     .fallbackConf(ZK_DATA_DIR)
 
   val ZK_TICK_TIME: ConfigEntry[Int] = buildConf("kyuubi.zookeeper.embedded.tick.time")
-    .doc("tickTime in milliseconds for the embedded zookeeper server")
+    .doc("tickTime in milliseconds for the embedded ZooKeeper server")
     .version("1.2.0")
     .intConf
     .createWithDefault(3000)
 
   val ZK_MAX_CLIENT_CONNECTIONS: ConfigEntry[Int] =
     buildConf("kyuubi.zookeeper.embedded.max.client.connections")
-      .doc("maxClientCnxns for the embedded zookeeper server to limits the number of concurrent" +
+      .doc("maxClientCnxns for the embedded ZooKeeper server to limit the number of concurrent" +
         " connections of a single client identified by IP address")
       .version("1.2.0")
       .intConf
@@ -77,7 +77,7 @@ object ZookeeperConf {
 
   val ZK_MIN_SESSION_TIMEOUT: ConfigEntry[Int] =
     buildConf("kyuubi.zookeeper.embedded.min.session.timeout")
-      .doc("minSessionTimeout in milliseconds for the embedded zookeeper server will allow the" +
+      .doc("minSessionTimeout in milliseconds for the embedded ZooKeeper server will allow the" +
         " client to negotiate. Defaults to 2 times the tickTime")
       .version("1.2.0")
       .intConf
@@ -85,7 +85,7 @@ object ZookeeperConf {
 
   val ZK_MAX_SESSION_TIMEOUT: ConfigEntry[Int] =
     buildConf("kyuubi.zookeeper.embedded.max.session.timeout")
-      .doc("maxSessionTimeout in milliseconds for the embedded zookeeper server will allow the" +
+      .doc("maxSessionTimeout in milliseconds for the embedded ZooKeeper server will allow the" +
         " client to negotiate. Defaults to 20 times the tickTime")
       .version("1.2.0")
       .intConf
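As a closing note on the two session timeout knobs, the defaults derive from tickTime as the docs above state; the sketch below is illustration only, not a Kyuubi API:

    // Default negotiable session timeout bounds of the embedded ZooKeeper
    // server when the min/max values are not explicitly configured.
    def sessionTimeoutBounds(tickTimeMs: Int): (Int, Int) =
      (2 * tickTimeMs, 20 * tickTimeMs) // (minSessionTimeout, maxSessionTimeout)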