Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/07/16 23:09:04 UTC

[jira] [Created] (SPARK-9119) In some cases, we may save wrong decimal values to parquet

Yin Huai created SPARK-9119:
-------------------------------

             Summary: In some cases, we may save wrong decimal values to parquet
                 Key: SPARK-9119
                 URL: https://issues.apache.org/jira/browse/SPARK-9119
             Project: Spark
          Issue Type: Sub-task
          Components: SQL
            Reporter: Yin Huai
            Priority: Critical


{code}
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, DecimalType}
import org.apache.spark.sql.types.Decimal

// Build a DataFrame with one non-nullable DecimalType(10, 5) column holding 67123.45.
val schema = StructType(Array(StructField("name", DecimalType(10, 5), false)))
val rowRDD = sc.parallelize(Array(Row(Decimal("67123.45"))))
val df = sqlContext.createDataFrame(rowRDD, schema)
df.registerTempTable("test")
df.show()
// +--------+
// |    name|
// +--------+
// |67123.45|
// +--------+

sqlContext.sql("create table testDecimal as select * from test")
sqlContext.table("testDecimal").show()
// +--------+
// |    name|
// +--------+
// |67.12345|
// +--------+
{code}

The problem is that when we convert the values for writing, we do not use the precision/scale information in the schema. Decimal("67123.45") carries scale 2 and unscaled value 6712345; the reader then applies the schema's scale of 5 to that raw unscaled value, producing 67.12345.
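For illustration, here is a minimal sketch in plain Scala, using java.math.BigDecimal directly rather than Spark's actual write path, showing how dropping the schema's scale reproduces exactly this corruption:

{code}
import java.math.BigDecimal

// Decimal("67123.45") parses with scale 2; its unscaled value is 6712345.
val parsed = new BigDecimal("67123.45")
val unscaled = parsed.unscaledValue()                        // 6712345

// Buggy path: store the raw unscaled value and let the reader apply the
// schema's scale of 5, so 6712345 is reinterpreted as 6712345 * 10^-5.
val corrupted = new BigDecimal(unscaled, 5)                  // 67.12345

// Correct path: rescale to the schema's scale before taking the unscaled
// value, so the reader reconstructs the original number.
val rescaled = parsed.setScale(5)                            // 67123.45000
val roundTrip = new BigDecimal(rescaled.unscaledValue(), 5)  // 67123.45000
{code}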


