Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2017/06/24 05:13:00 UTC

[jira] [Assigned] (SPARK-20555) Incorrect handling of Oracle's decimal types via JDBC

     [ https://issues.apache.org/jira/browse/SPARK-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Li reassigned SPARK-20555:
-------------------------------

    Assignee: Gabor Feher

> Incorrect handling of Oracle's decimal types via JDBC
> -----------------------------------------------------
>
>                 Key: SPARK-20555
>                 URL: https://issues.apache.org/jira/browse/SPARK-20555
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Gabor Feher
>            Assignee: Gabor Feher
>             Fix For: 2.1.2, 2.2.0
>
>
> When querying an Oracle database, Spark maps some Oracle numeric data types to incorrect Catalyst data types:
> 1. DECIMAL(1) becomes BooleanType
> In Oracle, a DECIMAL(1) can have values from -9 to 9.
> In Spark now, values larger than 1 become the boolean value true.
> 2. DECIMAL(3,2) becomes IntegerType
> In Oracle, a DECIMAL(3,2) can have values such as 1.23.
> In Spark now, digits after the decimal point are dropped.
> 3. DECIMAL(10) becomes IntegerType
> In Oracle, a DECIMAL(10) can have the value 9999999999 (ten nines), which is larger than 2^31 - 1, the maximum 32-bit signed integer.
> Spark throws an exception: "java.sql.SQLException: Numeric Overflow"
> I think the best solution is to always keep Oracle's decimal types. (In theory we could introduce a FloatType in some cases of #2, and fix #3 by only introducing IntegerType for DECIMAL(9). But in my opinion, that would end up complicated and error-prone.)
> Note: I think the above problems were introduced as part of https://github.com/apache/spark/pull/14377
> The main purpose of that PR seems to be converting Spark types to the correct Oracle types, and that part looks good to me. But it also adds the inverse conversions, and as the above examples show, those cannot be done correctly.
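
The following is a minimal, hypothetical sketch of how the three mis-mappings described above would surface when reading from Oracle. The connection URL, credentials, and the TYPE_TEST table are placeholders invented for illustration, not anything from the original issue; the point is only to show where the wrong Catalyst types appear.

    import org.apache.spark.sql.SparkSession

    object Spark20555Repro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("SPARK-20555 repro").getOrCreate()

        // Assumes a table created in Oracle roughly as:
        //   CREATE TABLE TYPE_TEST (A DECIMAL(1), B DECIMAL(3,2), C DECIMAL(10));
        //   INSERT INTO TYPE_TEST VALUES (9, 1.23, 9999999999);
        val df = spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")   // placeholder URL
          .option("dbtable", "TYPE_TEST")
          .option("user", "scott")
          .option("password", "tiger")
          .load()

        // On affected 2.1.x builds this reports BooleanType / IntegerType / IntegerType
        // instead of DecimalType(1,0) / DecimalType(3,2) / DecimalType(10,0).
        df.printSchema()

        // Reading the rows surfaces the corruption: A > 1 collapses to true, B loses its
        // fractional digits, and C fails with "java.sql.SQLException: Numeric Overflow".
        df.show()
      }
    }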
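The proposal in the report (always keep Oracle's decimal types) can also be approximated from user code by registering a custom JdbcDialect that maps Oracle NUMERIC columns straight to DecimalType. The sketch below is illustrative only: the dialect name is made up, and the actual fix for this ticket was made in Spark's built-in OracleDialect rather than in user code.

    import java.sql.Types

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
    import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

    // Illustrative name; not part of Spark.
    object KeepDecimalOracleDialect extends JdbcDialect {

      override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")

      override def getCatalystType(
          sqlType: Int,
          typeName: String,
          size: Int,
          md: MetadataBuilder): Option[DataType] = {
        if (sqlType == Types.NUMERIC && size > 0) {
          // The JDBC data source records the column's scale in the metadata builder.
          val scale = if (md != null) md.build().getLong("scale").toInt else 0
          // Keep the declared precision and scale instead of guessing a narrower type.
          Some(DecimalType(math.min(size, DecimalType.MAX_PRECISION), math.max(scale, 0)))
        } else {
          None // fall back to Spark's default mapping for everything else
        }
      }
    }

    // Register it once per application, before reading from Oracle:
    // JdbcDialects.registerDialect(KeepDecimalOracleDialect)

Once registered, the dialect is consulted before the built-in OracleDialect for URLs it claims to handle, so subsequent JDBC reads pick up the decimal mapping.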



