Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/04/10 09:19:56 UTC

[GitHub] [spark] wangyum edited a comment on pull request #32090: [SPARK-34212][SQL][FOLLOWUP] Support reading data when DecimalMetadata is null

wangyum edited a comment on pull request #32090:
URL: https://github.com/apache/spark/pull/32090#issuecomment-817106123


   I think we should not support this case: it can lead to data correctness issues. For example:
    ```scala
    import org.apache.spark.sql.types.StructType

    // The written column is a plain int, but the read schema forces decimal(2, 0),
    // even though 999999999 does not fit that precision.
    spark.sql("SELECT 999999999 AS a").write.mode("overwrite").parquet("/tmp/SPARK-34212")
    val df = spark.read.schema(StructType.fromDDL("a decimal(2, 0)")).parquet("/tmp/SPARK-34212")
    df.write.saveAsTable("t1")

    // Arithmetic on the out-of-range value overflows the derived decimal result types.
    spark.sql("select a, a * a as a2, a * a * a as a3, a * a * a * a as a4 from t1").write.saveAsTable("t2")
    spark.sql("select * from t2").show(false)
    spark.sql("desc t2").show(false)
    ```
   
   ```
   +---------+----+----+----+
   |a        |a2  |a3  |a4  |
   +---------+----+----+----+
   |999999999|null|null|null|
   +---------+----+----+----+
   
   +--------+-------------+-------+
   |col_name|data_type    |comment|
   +--------+-------------+-------+
   |a       |decimal(2,0) |null   |
   |a2      |decimal(5,0) |null   |
   |a3      |decimal(8,0) |null   |
   |a4      |decimal(11,0)|null   |
   +--------+-------------+-------+
   ```
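
    The underlying mismatch can be seen by reading the same file without the forced schema (a minimal check, assuming the same /tmp/SPARK-34212 path written above):

    ```scala
    // The file's real column type is int, not decimal, so forcing decimal(2, 0)
    // silently accepts a value that exceeds the declared precision; the
    // multiplications then overflow decimal(5, 0)/(8, 0)/(11, 0) and return null.
    spark.read.parquet("/tmp/SPARK-34212").printSchema()
    // Should print something like:
    // root
    //  |-- a: integer (nullable = true)
    ```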




