Posted to issues@spark.apache.org by "Travis Hegner (JIRA)" <ji...@apache.org> on 2015/09/16 22:25:46 UTC

[jira] [Created] (SPARK-10648) Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an oracle schema.

Travis Hegner created SPARK-10648:
-------------------------------------

             Summary: Spark-SQL JDBC fails to set a default precision and scale when they are not defined in an oracle schema.
                 Key: SPARK-10648
                 URL: https://issues.apache.org/jira/browse/SPARK-10648
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.5.0
          Environment: Oracle 11g, ojdbc7.jar
            Reporter: Travis Hegner


Using Oracle 11g as a data source with ojdbc7.jar. When importing data into a Scala app, I am getting an "Overflowed precision" exception; sometimes I get "Unscaled value too large for precision" instead.

This issue likely affects older versions as well, but this was the version I verified it on.
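A minimal sketch of the kind of read that triggers this (the connection URL, credentials, and table name below are placeholders; the only requirement is that the table has a column declared simply as NUMBER, with no precision or scale in the schema):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object OracleNumberRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("oracle-number-repro"))
    val sqlContext = new SQLContext(sc)

    // Placeholder connection details; MY_TABLE is assumed to contain a
    // column declared as plain NUMBER (no explicit precision or scale).
    val df = sqlContext.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "MY_TABLE")
      .option("user", "username")
      .option("password", "password")
      .option("driver", "oracle.jdbc.OracleDriver")
      .load()

    // Materializing the rows is what surfaces the decimal errors,
    // e.g. "Overflowed precision" or "Unscaled value too large for precision".
    df.show()
  }
}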

I narrowed it down to the fact that the schema detection code was setting the precision to 0 and the scale to -127, which is what the Oracle JDBC metadata reports for a NUMBER column declared without an explicit precision or scale.
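For reference, one possible user-side workaround is to register a custom JdbcDialect that replaces the driver-reported type for such columns with a bounded decimal. This is only an illustration, not the pull request mentioned below; the object name and the default precision/scale of (38, 10) are my own choices:

import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types._

// Workaround sketch: when the Oracle driver reports an undeclared NUMBER
// column (precision 0, scale -127), map it to a bounded DecimalType instead
// of passing the unusable values through to Spark SQL.
object OracleUnboundedNumberDialect extends JdbcDialect {

  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:oracle")

  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] = {
    // `size` is the precision reported by the driver; an unqualified NUMBER
    // shows up here as 0. 38 is the maximum precision Spark SQL decimals
    // support; the scale of 10 is an arbitrary default.
    if (sqlType == Types.NUMERIC && size == 0) Some(DecimalType(38, 10))
    else None
  }
}

// Register the dialect before creating the DataFrame:
// JdbcDialects.registerDialect(OracleUnboundedNumberDialect)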

I have a proposed fix; a pull request will follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org