Posted to issues@carbondata.apache.org by "anubhav tarar (JIRA)" <ji...@apache.org> on 2017/03/06 11:50:32 UTC

[jira] [Assigned] (CARBONDATA-730) unsupported type: DecimalType

     [ https://issues.apache.org/jira/browse/CARBONDATA-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anubhav tarar reassigned CARBONDATA-730:
----------------------------------------

    Assignee: anubhav tarar

> unsupported type: DecimalType
> -----------------------------
>
>                 Key: CARBONDATA-730
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-730
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: spark-integration
>    Affects Versions: 1.0.0-incubating
>         Environment: Spark 1.6.2 Hadoop 2.6
>            Reporter: Sanoj MG
>            Assignee: anubhav tarar
>            Priority: Minor
>             Fix For: 1.1.0-incubating
>
>
> The exception below is thrown while trying to save a dataframe that has decimal columns.
> scala> df.printSchema
>  |-- account: integer (nullable = true)
>  |-- currency: integer (nullable = true)
>  |-- branch: integer (nullable = true)
>  |-- country: integer (nullable = true)
>  |-- date: date (nullable = true)
>  |-- fcbalance: decimal(16,3) (nullable = true)
>  |-- lcbalance: decimal(16,3) (nullable = true)
> scala> df.write.format("carbondata").option("tableName", "accBal").option("compress", "true").mode(SaveMode.Overwrite).save()
> java.lang.RuntimeException: unsupported type: DecimalType(16,3)
>         at scala.sys.package$.error(package.scala:27)
>         at org.apache.carbondata.spark.CarbonDataFrameWriter.org$apache$carbondata$spark$CarbonDataFrameWriter$$convertToCarbonType(CarbonDataFrameWriter.scala:172)
>         at org.apache.carbondata.spark.CarbonDataFrameWriter$$anonfun$2.apply(CarbonDataFrameWriter.scala:178)
>         at org.apache.carbondata.spark.CarbonDataFrameWriter$$anonfun$2.apply(CarbonDataFrameWriter.scala:177)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
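> For reference, a minimal reproduction sketch for the Spark 1.6 shell (the table and column names here are illustrative, not taken verbatim from the original job) that hits the same code path:
>
>     import org.apache.spark.sql.{Row, SaveMode}
>     import org.apache.spark.sql.types._
>
>     // Build a small DataFrame with a decimal(16,3) column, mirroring the schema above.
>     val schema = StructType(Seq(
>       StructField("account", IntegerType, nullable = true),
>       StructField("fcbalance", DecimalType(16, 3), nullable = true)
>     ))
>     val rows = sc.parallelize(Seq(Row(1, BigDecimal("1234.567"))))
>     val df = sqlContext.createDataFrame(rows, schema)
>
>     // Writing through the carbondata source fails in convertToCarbonType
>     // with "unsupported type: DecimalType(16,3)" before the patch below.
>     df.write
>       .format("carbondata")
>       .option("tableName", "accBal")
>       .option("compress", "true")
>       .mode(SaveMode.Overwrite)
>       .save()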
> This works fine with the change below:
> git diff
> diff --git a/integration/spark/src/main/scala/org/apache/carbondata/spark/CarbonDataFrameWriter.scala b/integration/spark/src/main/scala/org/apache/carbondata/spark/CarbonDataFrameWriter.scala
> index b843f59..cf9a775 100644
> --- a/integration/spark/src/main/scala/org/apache/carbondata/spark/CarbonDataFrameWriter.scala
> +++ b/integration/spark/src/main/scala/org/apache/carbondata/spark/CarbonDataFrameWriter.scala
> @@ -169,6 +169,7 @@ class CarbonDataFrameWriter(val dataFrame: DataFrame) {
>        case BooleanType => CarbonType.DOUBLE.getName
>        case TimestampType => CarbonType.TIMESTAMP.getName
>        case DateType => CarbonType.DATE.getName
> +      case dt: DecimalType => s"${CarbonType.DECIMAL.getName}(${dt.precision}, ${dt.scale})"
>        case other => sys.error(s"unsupported type: $other")
>      }
>    }
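> With that case added, a DecimalType column is rendered as a Carbon DDL type string carrying its precision and scale. A standalone sketch of the mapping (not the CarbonData code itself; the literal "decimal" stands in for CarbonType.DECIMAL.getName, so treat the exact name as an assumption):
>
>     import org.apache.spark.sql.types._
>
>     // Mirrors what the patched convertToCarbonType case would produce.
>     def toCarbonTypeName(dt: DataType): String = dt match {
>       case d: DecimalType => s"decimal(${d.precision}, ${d.scale})"
>       case other          => sys.error(s"unsupported type: $other")
>     }
>
>     // toCarbonTypeName(DecimalType(16, 3)) == "decimal(16, 3)"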
> Can I create a pull request?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)