Posted to reviews@spark.apache.org by "beliefer (via GitHub)" <gi...@apache.org> on 2023/12/18 10:13:45 UTC

[PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

beliefer opened a new pull request, #44398:
URL: https://github.com/apache/spark/pull/44398

   ### What changes were proposed in this pull request?
   This PR fixes a bug by making the JDBC dialect decide the decimal precision and scale.
   
   **How to reproduce the bug?**
   https://github.com/apache/spark/pull/44397 proposed pushing down `PERCENTILE_CONT` and `PERCENTILE_DISC` for DS V2.
   The bug is triggered when the SQL below is pushed down to H2 via JDBC.
   `SELECT "DEPT",PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY "SALARY" ASC NULLS FIRST) FROM "test"."employee" WHERE 1=0 GROUP BY "DEPT"`
   
   **The root cause**
   `getQueryOutputSchema` is used to get the output schema of a query by calling `JdbcUtils.getSchema`.
   The query issued to the H2 database is shown below.
   `SELECT "DEPT",PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY "SALARY" ASC NULLS FIRST) FROM "test"."employee" WHERE 1=0 GROUP BY "DEPT"`
   We get the following five values from `ResultSetMetaData`:
   ```
   columnName = "PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY SALARY NULLS FIRST)"
   dataType = 2
   typeName = "NUMERIC"
   fieldSize = 100000
   fieldScale = 50000
   ```
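   For reference, these values can be read with plain JDBC, as in this minimal sketch (the in-memory URL and the column index 2 are assumptions):
   ```scala
   import java.sql.DriverManager

   // Run the pushed-down query against H2 and inspect the metadata of the
   // PERCENTILE_CONT column.
   val conn = DriverManager.getConnection("jdbc:h2:mem:testdb")
   val rs = conn.createStatement().executeQuery(
     """SELECT "DEPT", PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY "SALARY" ASC NULLS FIRST)
       |FROM "test"."employee" WHERE 1=0 GROUP BY "DEPT"
       |""".stripMargin)
   val md = rs.getMetaData
   md.getColumnLabel(2)    // columnName
   md.getColumnType(2)     // dataType = 2 (java.sql.Types.NUMERIC)
   md.getColumnTypeName(2) // typeName = "NUMERIC"
   md.getPrecision(2)      // fieldSize = 100000
   md.getScale(2)          // fieldScale = 50000
   ```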
   Then we derive the Catalyst schema with `JdbcUtils.getCatalystType`, which ultimately calls `DecimalType.bounded(precision, scale)`.
   `DecimalType.bounded(100000, 50000)` returns `DecimalType(38, 38)`: precision and scale are each clamped to the maximum of 38 independently, so no digits are left for the integral part.
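   The clamping behaves like this simplified sketch (a model of the behavior described above, not the exact Spark source):
   ```scala
   import scala.math.min

   // Each component is capped at 38 independently, so a huge
   // (precision, scale) pair such as (100000, 50000) collapses to (38, 38).
   val MAX_PRECISION = 38
   def bounded(precision: Int, scale: Int): (Int, Int) =
     (min(precision, MAX_PRECISION), min(scale, MAX_PRECISION))

   bounded(100000, 50000) // (38, 38)
   ```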
   Finally, `makeGetter` throws an exception:
   ```
   Caused by: org.apache.spark.SparkArithmeticException: [DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 42 exceeds max precision 38. SQLSTATE: 22003
   	at org.apache.spark.sql.errors.DataTypeErrors$.decimalPrecisionExceedsMaxPrecisionError(DataTypeErrors.scala:48)
   	at org.apache.spark.sql.types.Decimal.set(Decimal.scala:124)
   	at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:577)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$4(JdbcUtils.scala:408)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.nullSafeConvert(JdbcUtils.scala:552)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3(JdbcUtils.scala:408)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3$adapted(JdbcUtils.scala:406)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:358)
   	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:339)
   ```
   
   ### Why are the changes needed?
   This PR fixes the bug that `JdbcUtils` cannot derive the correct decimal type.
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   ### How was this patch tested?
   
   
   ### Was this patch authored or co-authored using generative AI tooling?
   




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1432253449


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -57,6 +57,22 @@ private[sql] object H2Dialect extends JdbcDialect {
   override def isSupportedFunction(funcName: String): Boolean =
     supportedFunctions.contains(funcName)
 
+  override def getCatalystType(
+      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
+    sqlType match {
+      case Types.NUMERIC =>
+        val scale = if (null != md) md.build().getLong("scale") else 0L
+        size match {
+          // SPARK-46443: Decimal precision and scale should decided by H2 dialect.
+          // Handle NUMBER fields that have incorrect precision/scale in special way
+          // because JDBC ResultSetMetaData returns 100000 precision and 50000 scale
+          case 100000 if scale == 50000 => Option(DecimalType(DecimalType.MAX_PRECISION, 19))

Review Comment:
   this is too specific. Can we do it if precision > 38?





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862727791

   > I think this is already wrong. We should update the H2 dialect to return `decimal(38, 19)`, so that we have half digits for the integral part and half digits for the fraction part.
   
   Let me try this way.




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1429868263


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##########
@@ -105,6 +105,25 @@ abstract class JdbcDialect extends Serializable with Logging {
    */
   def getJDBCType(dt: DataType): Option[JdbcType] = None
 
+  /**
+   * Converts an instance of `java.math.BigDecimal` to a `Decimal` value.
+   * @param d represents a specific `java.math.BigDecimal`.
+   * @param precision the precision for Decimal based on JDBC metadata.
+   * @param scale the scale for Decimal based on JDBC metadata.
+   * @return the `Decimal` value to convert to
+   */
+  @Since("4.0.0")
+  def convertBigDecimalToDecimal(d: BigDecimal, precision: Int, scale: Int): Decimal =
+    // When connecting with Oracle DB through JDBC, the precision and scale of BigDecimal
+    // object returned by ResultSet.getBigDecimal is not correctly matched to the table
+    // schema reported by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale.
+    // If inserting values like 19999 into a column with NUMBER(12, 2) type, you get through
+    // a BigDecimal object with scale as 0. But the dataframe schema has correct type as
+    // DecimalType(12, 2). Thus, after saving the dataframe into parquet file and then
+    // retrieve it, you will get wrong result 199.99.
+    // So it is needed to set precision and scale for Decimal based on JDBC metadata.
+    Decimal(d, precision, scale)

Review Comment:
   I don't know the background, so I kept it as the default implementation.
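   For context, a hedged standalone demo of what the quoted default does with the Oracle example from the code comment (not the patch itself):
   ```scala
   import org.apache.spark.sql.types.Decimal

   // A NUMBER(12, 2) column holding 19999 can come back from the Oracle JDBC
   // driver as a BigDecimal with scale 0. Pinning the metadata-reported
   // precision/scale keeps the value as 19999.00 instead of letting the
   // unscaled 19999 be reinterpreted later as 199.99.
   val fromJdbc = new java.math.BigDecimal(19999) // scale 0
   Decimal(fromJdbc, 12, 2) // Decimal "19999.00" with precision 12, scale 2
   ```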





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1432515207


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -57,6 +57,22 @@ private[sql] object H2Dialect extends JdbcDialect {
   override def isSupportedFunction(funcName: String): Boolean =
     supportedFunctions.contains(funcName)
 
+  override def getCatalystType(
+      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
+    sqlType match {
+      case Types.NUMERIC =>
+        val scale = if (null != md) md.build().getLong("scale") else 0L
+        size match {
+          // SPARK-46443: Decimal precision and scale should decided by H2 dialect.
+          // Handle NUMBER fields that have incorrect precision/scale in special way
+          // because JDBC ResultSetMetaData returns 100000 precision and 50000 scale
+          case 100000 if scale == 50000 => Option(DecimalType(DecimalType.MAX_PRECISION, 19))

Review Comment:
   I suspect H2 may only produce this particular combination.
   Other cases with precision greater than 38 have not actually been verified. Can we wait until we encounter other exceptions before generalizing?





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862694659

   So what we need is a cast? It seems `Decimal(d, p, s)` is not safe.




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1867133850

   thanks, merging to master/3.5!




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1431188007


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##########
@@ -105,6 +105,25 @@ abstract class JdbcDialect extends Serializable with Logging {
    */
   def getJDBCType(dt: DataType): Option[JdbcType] = None
 
+  /**
+   * Converts an instance of `java.math.BigDecimal` to a `Decimal` value.
+   * @param d represents a specific `java.math.BigDecimal`.
+   * @param precision the precision for Decimal based on JDBC metadata.
+   * @param scale the scale for Decimal based on JDBC metadata.
+   * @return the `Decimal` value to convert to
+   */
+  @Since("4.0.0")
+  def convertBigDecimalToDecimal(d: BigDecimal, precision: Int, scale: Int): Decimal =
+    // When connecting with Oracle DB through JDBC, the precision and scale of BigDecimal
+    // object returned by ResultSet.getBigDecimal is not correctly matched to the table
+    // schema reported by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale.
+    // If inserting values like 19999 into a column with NUMBER(12, 2) type, you get through
+    // a BigDecimal object with scale as 0. But the dataframe schema has correct type as
+    // DecimalType(12, 2). Thus, after saving the dataframe into parquet file and then
+    // retrieve it, you will get wrong result 199.99.
+    // So it is needed to set precision and scale for Decimal based on JDBC metadata.
+    Decimal(d, precision, scale)

Review Comment:
   cc @JoshRosen 





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862726989

   > So what we need is a cast? seems `Decimal(d, p, s)` is not safe.
   
   Yes.




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862197046

   Where does `Decimal precision 42` come from?




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862704957

   > The DecimalType.bounded(100000, 50000) returns DecimalType(38, 38).
   
   I think this is already wrong. We should update the H2 dialect to return `decimal(38, 19)`, so that we have half digits for the integral part and half digits for the fraction part.




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1434034597


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -57,6 +57,20 @@ private[sql] object H2Dialect extends JdbcDialect {
   override def isSupportedFunction(funcName: String): Boolean =
     supportedFunctions.contains(funcName)
 
+  override def getCatalystType(
+      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
+    sqlType match {
+      case Types.NUMERIC if size > 38 =>
+        // Handle NUMBER fields that have incorrect precision/scale in special way
+        // because the precision and scale of H2 must be from 1 to 100000. Adjust the precision
+        // and scale of Decimal type according to the ratio of precision and scale.

Review Comment:
   let's make the comment a bit clearer:
   ```
   H2 supports very large decimal precision like 100000. The max precision in Spark is only 38.
   Here we shrink both the precision and scale of H2 decimal to fit Spark, and still keep the ratio between them.
   ```
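   For illustration, a hedged sketch of the ratio-preserving shrink being discussed (hypothetical helper, not the exact patch):
   ```scala
   import org.apache.spark.sql.types.{DataType, DecimalType}

   // Hypothetical helper: cap precision at Spark's maximum (38) and shrink
   // the scale by the same factor, preserving the integral/fractional split.
   def shrinkDecimal(size: Int, scale: Int): DataType = {
     val precision = DecimalType.MAX_PRECISION // 38
     DecimalType(precision, (scale.toDouble / size * precision).toInt)
   }

   shrinkDecimal(100000, 50000) // DecimalType(38, 19)
   ```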





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan closed pull request #44398: [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect.
URL: https://github.com/apache/spark/pull/44398




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1867135949

   @cloud-fan Thank you!




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by H2 dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1867126053

   The GA failure is unrelated.




Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "cloud-fan (via GitHub)" <gi...@apache.org>.
cloud-fan commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1431363374


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -66,6 +66,9 @@ private[sql] object H2Dialect extends JdbcDialect {
     case _ => JdbcUtils.getCommonJDBCType(dt)
   }
 
+  override def convertBigDecimalToDecimal(d: BigDecimal, precision: Int, scale: Int): Decimal =
+    Decimal(d)

Review Comment:
   This seems wrong. I think we should make sure the final `Decimal` instance we return has the same precision and scale as the JDBC column type.





Re: [PR] [SPARK-46443][SQL] Decimal precision and scale should be decided by JDBC dialect. [spark]

Posted by "beliefer (via GitHub)" <gi...@apache.org>.
beliefer commented on PR #44398:
URL: https://github.com/apache/spark/pull/44398#issuecomment-1862468433

   > Where does `Decimal precision 42` come from?
   
   It comes from https://github.com/apache/spark/blob/dc0bfc4c700c347f2f58625facec8c5771bde59a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L416
   The schema is `DecimalType(38, 38)`, while the value returned from H2 is a `java.math.BigDecimal` with precision 7 and scale 2.
   d = the `java.math.BigDecimal` from H2 (precision 7, scale 2)
   p = 38
   s = 38
   `Decimal(d, p, s)` rescales the value to scale 38; the integral digits then push the total precision past the maximum of 38, which causes the exception.
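   A minimal sketch reproducing the failure (the concrete value is hypothetical, with 4 integral digits so it matches the precision-42 error in the stack trace):
   ```scala
   import org.apache.spark.sql.types.Decimal

   // Rescaling a value with 4 integral digits to scale 38 needs 4 + 38 = 42
   // digits in total, exceeding Spark's max decimal precision of 38.
   val d = new java.math.BigDecimal("1200.00")
   Decimal(d, 38, 38) // SparkArithmeticException: DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION
   ```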

