Posted to commits@spark.apache.org by yu...@apache.org on 2022/03/31 07:02:36 UTC

[spark] branch branch-3.3 updated: [SPARK-37206][BUILD][FOLLOWUP] Update avro to 1.11.0 in `SparkBuild.scala` and docs

This is an automated email from the ASF dual-hosted git repository.

yumwang pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
     new 1ab7333  [SPARK-37206][BUILD][FOLLOWUP] Update avro to 1.11.0 in `SparkBuild.scala` and docs
1ab7333 is described below

commit 1ab73338dd1aeceaa6c7bdda2f61ba813c2244e5
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Thu Mar 31 15:00:04 2022 +0800

    [SPARK-37206][BUILD][FOLLOWUP] Update avro to 1.11.0 in `SparkBuild.scala` and docs
    
    ### What changes were proposed in this pull request?
    
    This is a follow-up of https://github.com/apache/spark/pull/34482 to update the Avro version consistently in both Maven and SBT.
    
    In addition, this also reviews and updates the doc links and adds a note for the future.
    
    ### Why are the changes needed?
    
    Due to the mismatch, compilation failures occur on some systems.
    ```
    $ build/mvn dependency:tree -pl core | grep avro
    [INFO] +- org.apache.avro:avro:jar:1.11.0:compile
    [INFO] +- org.apache.avro:avro-mapred:jar:1.11.0:compile
    [INFO] |  \- org.apache.avro:avro-ipc:jar:1.11.0:compile
    ```
    
    ```
    $ build/sbt "core/dependencyTree" | grep avro
    [info]   +-org.apache.avro:avro-mapred:1.11.0
    [info]   | +-org.apache.avro:avro-ipc:1.11.0
    [info]   | | +-org.apache.avro:avro:1.10.2
    [info]   +-org.apache.avro:avro:1.10.2
    ```
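
    The mismatch above is visible because Maven resolves `org.apache.avro:avro` to 1.11.0 while SBT's `dependencyOverrides` still pins 1.10.2. As a minimal sketch (not part of the Spark build; the sample tree text is taken from the outputs above), one can scan dependency-tree output from both tools and flag the inconsistency:

```python
import re

def artifact_versions(tree_output: str,
                      artifact: str = "org.apache.avro:avro") -> set:
    """Collect distinct versions of `artifact` from mvn/sbt dependency-tree text.

    Matches both the Maven line format ("org.apache.avro:avro:jar:1.11.0:compile")
    and the SBT line format ("org.apache.avro:avro:1.10.2").
    """
    pattern = re.compile(re.escape(artifact) + r":(?:jar:)?(\d+(?:\.\d+)+)")
    return set(pattern.findall(tree_output))

# Sample outputs quoted from the commit message above.
mvn_tree = """
[INFO] +- org.apache.avro:avro:jar:1.11.0:compile
[INFO] +- org.apache.avro:avro-mapred:jar:1.11.0:compile
[INFO] |  \\- org.apache.avro:avro-ipc:jar:1.11.0:compile
"""
sbt_tree = """
[info]   +-org.apache.avro:avro-mapred:1.11.0
[info]   | +-org.apache.avro:avro-ipc:1.11.0
[info]   | | +-org.apache.avro:avro:1.10.2
[info]   +-org.apache.avro:avro:1.10.2
"""

all_versions = artifact_versions(mvn_tree) | artifact_versions(sbt_tree)
if len(all_versions) > 1:
    print("mismatch across builds:", sorted(all_versions))
```

    After this commit both trees resolve to a single version, so the combined set would contain only `1.11.0`.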
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Manually verified.
    
    Closes #36019 from dongjoon-hyun/SPARK-37206.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Yuming Wang <yu...@ebay.com>
    (cherry picked from commit d790f3e144f63dc0e7a1ecb754cdfc69f568cc8c)
    Signed-off-by: Yuming Wang <yu...@ebay.com>
---
 docs/sql-data-sources-avro.md                                         | 4 ++--
 .../avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala   | 4 ++--
 pom.xml                                                               | 1 +
 project/SparkBuild.scala                                              | 2 +-
 .../test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala | 2 +-
 5 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/docs/sql-data-sources-avro.md b/docs/sql-data-sources-avro.md
index db3e03c..28f4104 100644
--- a/docs/sql-data-sources-avro.md
+++ b/docs/sql-data-sources-avro.md
@@ -393,7 +393,7 @@ applications. Read the [Advanced Dependency Management](https://spark.apache
 Submission Guide for more details. 
 
 ## Supported types for Avro -> Spark SQL conversion
-Currently Spark supports reading all [primitive types](https://avro.apache.org/docs/1.10.2/spec.html#schema_primitive) and [complex types](https://avro.apache.org/docs/1.10.2/spec.html#schema_complex) under records of Avro.
+Currently Spark supports reading all [primitive types](https://avro.apache.org/docs/1.11.0/spec.html#schema_primitive) and [complex types](https://avro.apache.org/docs/1.11.0/spec.html#schema_complex) under records of Avro.
 <table class="table">
   <tr><th><b>Avro type</b></th><th><b>Spark SQL type</b></th></tr>
   <tr>
@@ -457,7 +457,7 @@ In addition to the types listed above, it supports reading `union` types. The fo
 3. `union(something, null)`, where something is any supported Avro type. This will be mapped to the same Spark SQL type as that of something, with nullable set to true.
 All other union types are considered complex. They will be mapped to StructType where field names are member0, member1, etc., in accordance with members of the union. This is consistent with the behavior when converting between Avro and Parquet.
 
-It also supports reading the following Avro [logical types](https://avro.apache.org/docs/1.10.2/spec.html#Logical+Types):
+It also supports reading the following Avro [logical types](https://avro.apache.org/docs/1.11.0/spec.html#Logical+Types):
 
 <table class="table">
   <tr><th><b>Avro logical type</b></th><th><b>Avro type</b></th><th><b>Spark SQL type</b></th></tr>
diff --git a/external/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala b/external/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
index 9fe5007..48b2c34 100644
--- a/external/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
+++ b/external/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
@@ -77,14 +77,14 @@ private[sql] class AvroOptions(
 
   /**
    * Top level record name in write result, which is required in Avro spec.
-   * See https://avro.apache.org/docs/1.10.2/spec.html#schema_record .
+   * See https://avro.apache.org/docs/1.11.0/spec.html#schema_record .
    * Default value is "topLevelRecord"
    */
   val recordName: String = parameters.getOrElse("recordName", "topLevelRecord")
 
   /**
    * Record namespace in write result. Default value is "".
-   * See Avro spec for details: https://avro.apache.org/docs/1.10.2/spec.html#schema_record .
+   * See Avro spec for details: https://avro.apache.org/docs/1.11.0/spec.html#schema_record .
    */
   val recordNamespace: String = parameters.getOrElse("recordNamespace", "")
 
diff --git a/pom.xml b/pom.xml
index a8a6a13..cee7970 100644
--- a/pom.xml
+++ b/pom.xml
@@ -149,6 +149,7 @@
     the link to metrics.dropwizard.io in docs/monitoring.md.
     -->
     <codahale.metrics.version>4.2.7</codahale.metrics.version>
+    <!-- Should be consistent with SparkBuild.scala and docs -->
     <avro.version>1.11.0</avro.version>
     <aws.kinesis.client.version>1.12.0</aws.kinesis.client.version>
     <!-- Should be consistent with Kinesis client dependency -->
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index b536b50..934fa4a 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -708,7 +708,7 @@ object DependencyOverrides {
     dependencyOverrides += "com.google.guava" % "guava" % guavaVersion,
     dependencyOverrides += "xerces" % "xercesImpl" % "2.12.0",
     dependencyOverrides += "jline" % "jline" % "2.14.6",
-    dependencyOverrides += "org.apache.avro" % "avro" % "1.10.2")
+    dependencyOverrides += "org.apache.avro" % "avro" % "1.11.0")
 }
 
 /**
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
index a23efd8..ad0f9a5 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
@@ -895,7 +895,7 @@ class HiveClientSuite(version: String, allVersions: Seq[String])
   test("Decimal support of Avro Hive serde") {
     val tableName = "tab1"
     // TODO: add the other logical types. For details, see the link:
-    // https://avro.apache.org/docs/1.8.1/spec.html#Logical+Types
+    // https://avro.apache.org/docs/1.11.0/spec.html#Logical+Types
     val avroSchema =
     """{
       |  "name": "test_record",
