Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/09/16 22:31:45 UTC

[jira] [Commented] (SPARK-10648) Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an Oracle schema.

    [ https://issues.apache.org/jira/browse/SPARK-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14791081#comment-14791081 ] 

Apache Spark commented on SPARK-10648:
--------------------------------------

User 'travishegner' has created a pull request for this issue:
https://github.com/apache/spark/pull/8780

> Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an Oracle schema.
> --------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10648
>                 URL: https://issues.apache.org/jira/browse/SPARK-10648
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0
>         Environment: using Oracle 11g, ojdbc7.jar
>            Reporter: Travis Hegner
>
> Using Oracle 11g as a data source with ojdbc7.jar. When importing data into a Scala app, I am getting an "Overflowed precision" exception. Sometimes I would instead get the exception "Unscaled value too large for precision".
> This issue likely affects older versions as well, but 1.5.0 is the version I verified it on.
> I narrowed it down to the schema detection system setting the precision to 0 and the scale to -127 whenever the Oracle schema does not define them.
> I have a proposed pull request to follow.
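
For context, per-database type mapping in Spark goes through the JdbcDialect API, so a workaround along these lines is possible on the user side. The following is a minimal sketch only: the dialect name is hypothetical, and the fallback of DecimalType(38, 10) is an assumed default for illustration, not necessarily what the pull request does.

  import java.sql.Types
  import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
  import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

  // Hypothetical dialect: when Oracle reports a NUMBER column with no
  // declared precision/scale (the JDBC metadata comes back as precision 0,
  // scale -127), substitute a legal DecimalType instead of letting the
  // invalid values through.
  case object OracleNumberDialect extends JdbcDialect {
    override def canHandle(url: String): Boolean =
      url.startsWith("jdbc:oracle")

    override def getCatalystType(
        sqlType: Int,
        typeName: String,
        size: Int,                 // reported precision
        md: MetadataBuilder): Option[DataType] = {
      // Assumed defaults: 38 is the maximum decimal precision in both
      // Oracle and Spark SQL; a scale of 10 is an arbitrary illustrative
      // choice.
      if (sqlType == Types.NUMERIC && size == 0) Some(DecimalType(38, 10))
      else None                    // defer to Spark's default mapping
    }
  }

  JdbcDialects.registerDialect(OracleNumberDialect)

Registering such a dialect before reading the table would make undeclared NUMBER columns load as Decimal(38, 10) rather than overflowing during schema detection.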



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org