Posted to commits@spark.apache.org by gu...@apache.org on 2020/04/06 09:03:53 UTC

[spark] branch branch-3.0 updated (22e1c4f -> 88252ac)

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


    from 22e1c4f  [SPARK-31343][SQL][TESTS] Check codegen does not fail on expressions with escape chars in string parameters
     new a21a076  [SPARK-31279][SQL][DOC] Add version information to the configuration of Hive
     new 88252ac  [SPARK-31215][SQL][DOC] Add version information to the static configuration of SQL

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/configuration.md                              |  1 +
 docs/sql-data-sources-hive-tables.md               |  6 +++++-
 .../apache/spark/sql/internal/StaticSQLConf.scala  | 25 ++++++++++++++++++++--
 .../org/apache/spark/sql/hive/HiveUtils.scala      | 11 ++++++++++
 4 files changed, 40 insertions(+), 3 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org


[spark] 02/02: [SPARK-31215][SQL][DOC] Add version information to the static configuration of SQL

Posted by gu...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 88252acd72a3bd7780e2680264fea076d725b9ef
Author: beliefer <be...@163.com>
AuthorDate: Tue Mar 31 12:31:25 2020 +0900

    [SPARK-31215][SQL][DOC] Add version information to the static configuration of SQL
    
    ### What changes were proposed in this pull request?
    Add version information to the static configuration of `SQL`.
    
    I sorted out the information shown in the table below (a short sketch of the annotated builder pattern follows the table).
    
    Item name | Since version | JIRA ID | Commit ID | Note
    -- | -- | -- | -- | --
    spark.sql.warehouse.dir | 2.0.0 | SPARK-14994 | 054f991c4350af1350af7a4109ee77f4a34822f0#diff-32bb9518401c0948c5ea19377b5069ab |  
    spark.sql.catalogImplementation | 2.0.0 | SPARK-14720 and SPARK-13643 | 8fc267ab3322e46db81e725a5cb1adb5a71b2b4d#diff-6bdad48cfc34314e89599655442ff210 |  
    spark.sql.globalTempDatabase | 2.1.0 | SPARK-17338 | 23ddff4b2b2744c3dc84d928e144c541ad5df376#diff-6bdad48cfc34314e89599655442ff210 |  
    spark.sql.sources.schemaStringLengthThreshold | 1.3.1 | SPARK-6024 | 6200f0709c5c8440decae8bf700d7859f32ac9d5#diff-41ef65b9ef5b518f77e2a03559893f4d | 1.3
    spark.sql.filesourceTableRelationCacheSize | 2.2.0 | SPARK-19265 | 9d9d67c7957f7cbbdbe889bdbc073568b2bfbb16#diff-32bb9518401c0948c5ea19377b5069ab |
    spark.sql.codegen.cache.maxEntries | 2.4.0 | SPARK-24727 | b2deef64f604ddd9502a31105ed47cb63470ec85#diff-5081b9388de3add800b6e4a6ddf55c01 |
    spark.sql.codegen.comments | 2.0.0 | SPARK-15680 | f0e8738c1ec0e4c5526aeada6f50cf76428f9afd#diff-8bcc5aea39c73d4bf38aef6f6951d42c |  
    spark.sql.debug | 2.1.0 | SPARK-17899 | db8784feaa605adcbd37af4bc8b7146479b631f8#diff-32bb9518401c0948c5ea19377b5069ab |  
    spark.sql.hive.thriftServer.singleSession | 1.6.0 | SPARK-11089 | 167ea61a6a604fd9c0b00122a94d1bc4b1de24ff#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.extensions | 2.2.0 | SPARK-18127 | f0de600797ff4883927d0c70732675fd8629e239#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.queryExecutionListeners | 2.3.0 | SPARK-19558 | bd4eb9ce57da7bacff69d9ed958c94f349b7e6fb#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.streaming.streamingQueryListeners | 2.4.0 | SPARK-24479 | 7703b46d2843db99e28110c4c7ccf60934412504#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.ui.retainedExecutions | 1.5.0 | SPARK-8861 and SPARK-8862 | ebc3aad272b91cf58e2e1b4aa92b49b8a947a045#diff-81764e4d52817f83bdd5336ef1226bd9 |  
    spark.sql.broadcastExchange.maxThreadThreshold | 3.0.0 | SPARK-26601 | 126310ca68f2f248ea8b312c4637eccaba2fdc2b#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.subquery.maxThreadThreshold | 2.4.6 | SPARK-30556 | 2fc562cafd71ec8f438f37a28b65118906ab2ad2#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.event.truncate.length | 3.0.0 | SPARK-27045 | e60d8fce0b0cf2a6d766ea2fc5f994546550570a#diff-5081b9388de3add800b6e4a6ddf55c01 |
    spark.sql.legacy.sessionInitWithConfigDefaults | 3.0.0 | SPARK-27253 | 83f628b57da39ad9732d1393aebac373634a2eb9#diff-5081b9388de3add800b6e4a6ddf55c01 |
    spark.sql.defaultUrlStreamHandlerFactory.enabled | 3.0.0 | SPARK-25694 | 8469614c0513fbed87977d4e741649db3fdd8add#diff-5081b9388de3add800b6e4a6ddf55c01 |
    spark.sql.streaming.ui.enabled | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.streaming.ui.retainedProgressUpdates | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |  
    spark.sql.streaming.ui.retainedQueries | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |  
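
    The mechanical change is the same for every entry above: a `.version(...)` call inserted into the existing config-builder chain in `StaticSQLConf`. A minimal sketch of the pattern, assuming it is declared inside `object StaticSQLConf` where `buildStaticConf` is in scope as in the diff further down (the key, doc text and default here are hypothetical; only the builder chain mirrors this patch):

    ```scala
    // Sketch only: hypothetical entry illustrating the annotated builder chain.
    val EXAMPLE_RETAINED_ITEMS = buildStaticConf("spark.sql.example.retainedItems")  // hypothetical key
      .doc("Number of example items to retain (illustrative only).")
      .version("3.0.0")  // release in which this entry was first introduced
      .intConf
      .checkValue(_ >= 0, "Must not be negative")
      .createWithDefault(100)
    ```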
    
    ### Why are the changes needed?
    Supplements the static SQL configuration entries with version information.
    
    ### Does this PR introduce any user-facing change?
    'No'.
    
    ### How was this patch tested?
    Existing unit tests.
    
    Closes #27981 from beliefer/add-version-to-sql-static-config.
    
    Authored-by: beliefer <be...@163.com>
    Signed-off-by: HyukjinKwon <gu...@apache.org>
---
 docs/configuration.md                              |  1 +
 .../apache/spark/sql/internal/StaticSQLConf.scala  | 25 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index 8d1cc84..8bfd4d5 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1203,6 +1203,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>
     How many finished executions the Spark UI and status APIs remember before garbage collecting.
   </td>
+  <td>1.5.0</td>
 </tr>
 <tr>
   <td><code>spark.streaming.ui.retainedBatches</code></td>
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala
index 563e51e..d202528 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala
@@ -32,17 +32,20 @@ object StaticSQLConf {
 
   val WAREHOUSE_PATH = buildStaticConf("spark.sql.warehouse.dir")
     .doc("The default location for managed databases and tables.")
+    .version("2.0.0")
     .stringConf
     .createWithDefault(Utils.resolveURI("spark-warehouse").toString)
 
   val CATALOG_IMPLEMENTATION = buildStaticConf("spark.sql.catalogImplementation")
     .internal()
+    .version("2.0.0")
     .stringConf
     .checkValues(Set("hive", "in-memory"))
     .createWithDefault("in-memory")
 
   val GLOBAL_TEMP_DATABASE = buildStaticConf("spark.sql.globalTempDatabase")
     .internal()
+    .version("2.1.0")
     .stringConf
     .transform(_.toLowerCase(Locale.ROOT))
     .createWithDefault("global_temp")
@@ -55,9 +58,10 @@ object StaticSQLConf {
   // that's why this conf has to be a static SQL conf.
   val SCHEMA_STRING_LENGTH_THRESHOLD =
     buildStaticConf("spark.sql.sources.schemaStringLengthThreshold")
+      .internal()
       .doc("The maximum length allowed in a single cell when " +
         "storing additional schema information in Hive's metastore.")
-      .internal()
+      .version("1.3.1")
       .intConf
       .createWithDefault(4000)
 
@@ -65,6 +69,7 @@ object StaticSQLConf {
     buildStaticConf("spark.sql.filesourceTableRelationCacheSize")
       .internal()
       .doc("The maximum size of the cache that maps qualified table names to table relation plans.")
+      .version("2.2.0")
       .intConf
       .checkValue(cacheSize => cacheSize >= 0, "The maximum size of the cache must not be negative")
       .createWithDefault(1000)
@@ -73,6 +78,7 @@ object StaticSQLConf {
       .internal()
       .doc("When nonzero, enable caching of generated classes for operators and expressions. " +
         "All jobs share the cache that can use up to the specified number for generated classes.")
+      .version("2.4.0")
       .intConf
       .checkValue(maxEntries => maxEntries >= 0, "The maximum must not be negative")
       .createWithDefault(100)
@@ -82,6 +88,7 @@ object StaticSQLConf {
     .doc("When true, put comment in the generated code. Since computing huge comments " +
       "can be extremely expensive in certain cases, such as deeply-nested expressions which " +
       "operate over inputs with wide schemas, default is false.")
+    .version("2.0.0")
     .booleanConf
     .createWithDefault(false)
 
@@ -90,6 +97,7 @@ object StaticSQLConf {
   val DEBUG_MODE = buildStaticConf("spark.sql.debug")
     .internal()
     .doc("Only used for internal debugging. Not all functions are supported when it is enabled.")
+    .version("2.1.0")
     .booleanConf
     .createWithDefault(false)
 
@@ -98,6 +106,7 @@ object StaticSQLConf {
       .doc("When set to true, Hive Thrift server is running in a single session mode. " +
         "All the JDBC/ODBC connections share the temporary views, function registries, " +
         "SQL configuration and the current database.")
+      .version("1.6.0")
       .booleanConf
       .createWithDefault(false)
 
@@ -109,6 +118,7 @@ object StaticSQLConf {
       "applied in the specified order. For the case of parsers, the last parser is used and each " +
       "parser can delegate to its predecessor. For the case of function name conflicts, the last " +
       "registered function name is used.")
+    .version("2.2.0")
     .stringConf
     .toSequence
     .createOptional
@@ -117,6 +127,7 @@ object StaticSQLConf {
     .doc("List of class names implementing QueryExecutionListener that will be automatically " +
       "added to newly created sessions. The classes should have either a no-arg constructor, " +
       "or a constructor that expects a SparkConf argument.")
+    .version("2.3.0")
     .stringConf
     .toSequence
     .createOptional
@@ -125,6 +136,7 @@ object StaticSQLConf {
     .doc("List of class names implementing StreamingQueryListener that will be automatically " +
       "added to newly created sessions. The classes should have either a no-arg constructor, " +
       "or a constructor that expects a SparkConf argument.")
+    .version("2.4.0")
     .stringConf
     .toSequence
     .createOptional
@@ -132,6 +144,7 @@ object StaticSQLConf {
   val UI_RETAINED_EXECUTIONS =
     buildStaticConf("spark.sql.ui.retainedExecutions")
       .doc("Number of executions to retain in the Spark UI.")
+      .version("1.5.0")
       .intConf
       .createWithDefault(1000)
 
@@ -144,6 +157,7 @@ object StaticSQLConf {
         "Notice the number should be carefully chosen since decreasing parallelism might " +
         "cause longer waiting for other broadcasting. Also, increasing parallelism may " +
         "cause memory problem.")
+      .version("3.0.0")
       .intConf
       .checkValue(thres => thres > 0 && thres <= 128, "The threshold must be in (0,128].")
       .createWithDefault(128)
@@ -152,6 +166,7 @@ object StaticSQLConf {
     buildStaticConf("spark.sql.subquery.maxThreadThreshold")
       .internal()
       .doc("The maximum degree of parallelism to execute the subquery.")
+      .version("2.4.6")
       .intConf
       .checkValue(thres => thres > 0 && thres <= 128, "The threshold must be in (0,128].")
       .createWithDefault(16)
@@ -159,6 +174,7 @@ object StaticSQLConf {
   val SQL_EVENT_TRUNCATE_LENGTH = buildStaticConf("spark.sql.event.truncate.length")
     .doc("Threshold of SQL length beyond which it will be truncated before adding to " +
       "event. Defaults to no truncation. If set to 0, callsite will be logged instead.")
+    .version("3.0.0")
     .intConf
     .checkValue(_ >= 0, "Must be set greater or equal to zero")
     .createWithDefault(Int.MaxValue)
@@ -167,11 +183,13 @@ object StaticSQLConf {
     buildStaticConf("spark.sql.legacy.sessionInitWithConfigDefaults")
       .doc("Flag to revert to legacy behavior where a cloned SparkSession receives SparkConf " +
         "defaults, dropping any overrides in its parent SparkSession.")
+      .version("3.0.0")
       .booleanConf
       .createWithDefault(false)
 
   val DEFAULT_URL_STREAM_HANDLER_FACTORY_ENABLED =
     buildStaticConf("spark.sql.defaultUrlStreamHandlerFactory.enabled")
+      .internal()
       .doc(
         "When true, register Hadoop's FsUrlStreamHandlerFactory to support " +
         "ADD JAR against HDFS locations. " +
@@ -179,7 +197,7 @@ object StaticSQLConf {
         "to support a particular protocol type, or if Hadoop's FsUrlStreamHandlerFactory " +
         "conflicts with other protocol types such as `http` or `https`. See also SPARK-25694 " +
         "and HADOOP-14598.")
-      .internal()
+      .version("3.0.0")
       .booleanConf
       .createWithDefault(true)
 
@@ -187,6 +205,7 @@ object StaticSQLConf {
     buildStaticConf("spark.sql.streaming.ui.enabled")
       .doc("Whether to run the Structured Streaming Web UI for the Spark application when the " +
         "Spark Web UI is enabled.")
+      .version("3.0.0")
       .booleanConf
       .createWithDefault(true)
 
@@ -194,12 +213,14 @@ object StaticSQLConf {
     buildStaticConf("spark.sql.streaming.ui.retainedProgressUpdates")
       .doc("The number of progress updates to retain for a streaming query for Structured " +
         "Streaming UI.")
+      .version("3.0.0")
       .intConf
       .createWithDefault(100)
 
   val STREAMING_UI_RETAINED_QUERIES =
     buildStaticConf("spark.sql.streaming.ui.retainedQueries")
       .doc("The number of inactive queries to retain for Structured Streaming UI.")
+      .version("3.0.0")
       .intConf
       .createWithDefault(100)
 }


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org


[spark] 01/02: [SPARK-31279][SQL][DOC] Add version information to the configuration of Hive

Posted by gu...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit a21a0769b20c5fc613eb1b75362b349ba39290a4
Author: beliefer <be...@163.com>
AuthorDate: Tue Mar 31 12:35:01 2020 +0900

    [SPARK-31279][SQL][DOC] Add version information to the configuration of Hive
    
    ### What changes were proposed in this pull request?
    Add version information to the configuration of `Hive`.
    
    I sorted out the information shown in the table below (a short sketch of the annotated builder pattern follows the table).
    
    Item name | Since version | JIRA ID | Commit ID | Note
    -- | -- | -- | -- | --
    spark.sql.hive.metastore.version | 1.4.0 | SPARK-6908 | 05454fd8aef75b129cbbd0288f5089c5259f4a15#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.version | 1.1.1 | SPARK-3971 | 64945f868443fbc59cb34b34c16d782dda0fb63d#diff-12fa2178364a810b3262b30d8d48aa2d |  
    spark.sql.hive.metastore.jars | 1.4.0 | SPARK-6908 | 05454fd8aef75b129cbbd0288f5089c5259f4a15#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.convertMetastoreParquet | 1.1.1 | SPARK-2406 | cc4015d2fa3785b92e6ab079b3abcf17627f7c56#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.convertMetastoreParquet.mergeSchema | 1.3.1 | SPARK-6575 | 778c87686af0c04df9dfe144b8f744f271a988ad#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.convertMetastoreOrc | 2.0.0 | SPARK-14070 | 1e886159849e3918445d3fdc3c4cef86c6c1a236#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.convertInsertingPartitionedTable | 3.0.0 | SPARK-28573 | d5688dc732890923c326f272b0c18c329a69459a#diff-842e3447fc453de26c706db1cac8f2c4 |  
    spark.sql.hive.convertMetastoreCtas | 3.0.0 | SPARK-25271 | 5ad03607d1487e7ab3e3b6d00eef9c4028ed4975#diff-842e3447fc453de26c706db1cac8f2c4 |  
    spark.sql.hive.metastore.sharedPrefixes | 1.4.0 | SPARK-7491 | a8556086d33cb993fab0ae2751e31455e6c664ab#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.metastore.barrierPrefixes | 1.4.0 | SPARK-7491 | a8556086d33cb993fab0ae2751e31455e6c664ab#diff-ff50aea397a607b79df9bec6f2a841db |  
    spark.sql.hive.thriftServer.async | 1.5.0 | SPARK-6964 | eb19d3f75cbd002f7e72ce02017a8de67f562792#diff-ff50aea397a607b79df9bec6f2a841db |  
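
    As in the companion SQL patch, the change for every entry above is a `.version(...)` call added to the existing builder chain in `HiveUtils`. A minimal sketch, assuming it sits inside `object HiveUtils` where `buildConf` is in scope as in the diff below (the key, doc text and default are hypothetical; only the builder chain mirrors this patch):

    ```scala
    // Sketch only: hypothetical entry illustrating the annotated builder chain.
    val EXAMPLE_HIVE_FLAG = buildConf("spark.sql.hive.example.flag")  // hypothetical key
      .doc("Illustrative Hive-related flag carrying its introduction version.")
      .version("3.0.0")  // release in which this entry was first introduced
      .booleanConf
      .createWithDefault(true)
    ```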
    
    ### Why are the changes needed?
    Supplements the Hive configuration entries with version information.
    
    ### Does this PR introduce any user-facing change?
    'No'.
    
    ### How was this patch tested?
    Existing unit tests.
    
    Closes #28042 from beliefer/add-version-to-hive-config.
    
    Authored-by: beliefer <be...@163.com>
    Signed-off-by: HyukjinKwon <gu...@apache.org>
---
 docs/sql-data-sources-hive-tables.md                          |  6 +++++-
 .../src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala  | 11 +++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/docs/sql-data-sources-hive-tables.md b/docs/sql-data-sources-hive-tables.md
index 0054d46..22514cd 100644
--- a/docs/sql-data-sources-hive-tables.md
+++ b/docs/sql-data-sources-hive-tables.md
@@ -124,7 +124,7 @@ will compile against built-in Hive and use those classes for internal execution
 The following options can be used to configure the version of Hive that is used to retrieve metadata:
 
 <table class="table">
-  <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+  <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
   <tr>
     <td><code>spark.sql.hive.metastore.version</code></td>
     <td><code>2.3.6</code></td>
@@ -132,6 +132,7 @@ The following options can be used to configure the version of Hive that is used
       Version of the Hive metastore. Available
       options are <code>0.12.0</code> through <code>2.3.6</code> and <code>3.0.0</code> through <code>3.1.2</code>.
     </td>
+    <td>1.4.0</td>
   </tr>
   <tr>
     <td><code>spark.sql.hive.metastore.jars</code></td>
@@ -153,6 +154,7 @@ The following options can be used to configure the version of Hive that is used
         they are packaged with your application.</li>
       </ol>
     </td>
+    <td>1.4.0</td>
   </tr>
   <tr>
     <td><code>spark.sql.hive.metastore.sharedPrefixes</code></td>
@@ -166,6 +168,7 @@ The following options can be used to configure the version of Hive that is used
         custom appenders that are used by log4j.
       </p>
     </td>
+    <td>1.4.0</td>
   </tr>
   <tr>
     <td><code>spark.sql.hive.metastore.barrierPrefixes</code></td>
@@ -177,5 +180,6 @@ The following options can be used to configure the version of Hive that is used
         prefix that typically would be shared (i.e. <code>org.apache.spark.*</code>).
       </p>
     </td>
+    <td>1.4.0</td>
   </tr>
 </table>
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
index 9c4b8a5..3c20e68 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
@@ -65,6 +65,7 @@ private[spark] object HiveUtils extends Logging {
     .doc("Version of the Hive metastore. Available options are " +
         "<code>0.12.0</code> through <code>2.3.6</code> and " +
         "<code>3.0.0</code> through <code>3.1.2</code>.")
+    .version("1.4.0")
     .stringConf
     .createWithDefault(builtinHiveVersion)
 
@@ -73,6 +74,7 @@ private[spark] object HiveUtils extends Logging {
   // already rely on this config.
   val FAKE_HIVE_VERSION = buildConf("spark.sql.hive.version")
     .doc(s"deprecated, please use ${HIVE_METASTORE_VERSION.key} to get the Hive version in Spark.")
+    .version("1.1.1")
     .stringConf
     .createWithDefault(builtinHiveVersion)
 
@@ -89,12 +91,14 @@ private[spark] object HiveUtils extends Logging {
       |   Use Hive jars of specified version downloaded from Maven repositories.
       | 3. A classpath in the standard format for both Hive and Hadoop.
       """.stripMargin)
+    .version("1.4.0")
     .stringConf
     .createWithDefault("builtin")
 
   val CONVERT_METASTORE_PARQUET = buildConf("spark.sql.hive.convertMetastoreParquet")
     .doc("When set to true, the built-in Parquet reader and writer are used to process " +
       "parquet tables created by using the HiveQL syntax, instead of Hive serde.")
+    .version("1.1.1")
     .booleanConf
     .createWithDefault(true)
 
@@ -103,12 +107,14 @@ private[spark] object HiveUtils extends Logging {
       .doc("When true, also tries to merge possibly different but compatible Parquet schemas in " +
         "different Parquet data files. This configuration is only effective " +
         "when \"spark.sql.hive.convertMetastoreParquet\" is true.")
+      .version("1.3.1")
       .booleanConf
       .createWithDefault(false)
 
   val CONVERT_METASTORE_ORC = buildConf("spark.sql.hive.convertMetastoreOrc")
     .doc("When set to true, the built-in ORC reader and writer are used to process " +
       "ORC tables created by using the HiveQL syntax, instead of Hive serde.")
+    .version("2.0.0")
     .booleanConf
     .createWithDefault(true)
 
@@ -118,6 +124,7 @@ private[spark] object HiveUtils extends Logging {
         "`spark.sql.hive.convertMetastoreOrc` is true, the built-in ORC/Parquet writer is used" +
         "to process inserting into partitioned ORC/Parquet tables created by using the HiveSQL " +
         "syntax.")
+      .version("3.0.0")
       .booleanConf
       .createWithDefault(true)
 
@@ -126,6 +133,7 @@ private[spark] object HiveUtils extends Logging {
       "instead of Hive serde in CTAS. This flag is effective only if " +
       "`spark.sql.hive.convertMetastoreParquet` or `spark.sql.hive.convertMetastoreOrc` is " +
       "enabled respectively for Parquet and ORC formats")
+    .version("3.0.0")
     .booleanConf
     .createWithDefault(true)
 
@@ -135,6 +143,7 @@ private[spark] object HiveUtils extends Logging {
       "that should be shared is JDBC drivers that are needed to talk to the metastore. Other " +
       "classes that need to be shared are those that interact with classes that are already " +
       "shared. For example, custom appenders that are used by log4j.")
+    .version("1.4.0")
     .stringConf
     .toSequence
     .createWithDefault(jdbcPrefixes)
@@ -146,12 +155,14 @@ private[spark] object HiveUtils extends Logging {
     .doc("A comma separated list of class prefixes that should explicitly be reloaded for each " +
       "version of Hive that Spark SQL is communicating with. For example, Hive UDFs that are " +
       "declared in a prefix that typically would be shared (i.e. <code>org.apache.spark.*</code>).")
+    .version("1.4.0")
     .stringConf
     .toSequence
     .createWithDefault(Nil)
 
   val HIVE_THRIFT_SERVER_ASYNC = buildConf("spark.sql.hive.thriftServer.async")
     .doc("When set to true, Hive Thrift server executes SQL queries in an asynchronous way.")
+    .version("1.5.0")
     .booleanConf
     .createWithDefault(true)
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org