Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2019/01/12 19:10:00 UTC

[jira] [Resolved] (SPARK-26538) Postgres numeric array support

     [ https://issues.apache.org/jira/browse/SPARK-26538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-26538.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.0
                   2.4.1
                   2.3.3

This is resolved via https://github.com/apache/spark/pull/23456
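
The change makes the PostgreSQL dialect stop producing decimal(0,0) when the driver reports no explicit precision for a numeric type, as happens for unbounded numeric[] elements. A simplified sketch of the idea, not the verbatim patch:

{code:scala}
import org.apache.spark.sql.types.{DataType, DecimalType}

// Sketch of the dialect-side mapping: if the reported precision is 0 (the
// numeric type was declared without precision/scale), fall back to Spark's
// system default decimal(38,18) instead of an invalid decimal(0,0).
def toCatalystDecimal(precision: Int, scale: Int): DataType =
  if (precision > 0) DecimalType(math.min(precision, DecimalType.MAX_PRECISION), scale)
  else DecimalType.SYSTEM_DEFAULT
{code}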

> Postgres numeric array support
> ------------------------------
>
>                 Key: SPARK-26538
>                 URL: https://issues.apache.org/jira/browse/SPARK-26538
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.2, 2.3.2, 2.4.1
>         Environment: PostgreSQL 10.4, 9.6.9.
>            Reporter: Oleksii
>            Priority: Minor
>             Fix For: 2.3.3, 2.4.1, 3.0.0
>
>
> Consider the following table definition:
> {code:sql}
> create table test1
> (
>    v  numeric[],
>    d  numeric
> );
> insert into test1 values('{1111.222,2222.332}', 222.4555);
> {code}
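> The table can be read through Spark's JDBC data source; a minimal sketch, where the connection URL and credentials are placeholders for a local PostgreSQL instance:
> {code:scala}
> val df = spark.read
>   .format("jdbc")
>   .option("url", "jdbc:postgresql://localhost:5432/testdb")
>   .option("dbtable", "test1")
>   .option("user", "postgres")
>   .option("password", "postgres")
>   .load()
> df.printSchema()
> {code}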
> When reading the table into a DataFrame this way, I get the following schema:
> {noformat}
> root
>  |-- v: array (nullable = true)
>  |    |-- element: decimal(0,0) (containsNull = true)
>  |-- d: decimal(38,18) (nullable = true){noformat}
> Notice that precision and scale were not specified for either column, yet the array element type comes back as decimal(0,0), while the scalar column falls back to the default decimal(38,18).
> Later, when I try to read the DataFrame, I get the following error:
> {noformat}
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 4 exceeds max precision 0
>         at scala.Predef$.require(Predef.scala:224)
>         at org.apache.spark.sql.types.Decimal.set(Decimal.scala:114)
>         at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:453)
>         at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$16$$anonfun$apply$6$$anonfun$apply$7.apply(JdbcUtils.scala:474)
>         ...{noformat}
> I would expect the array elements to default to decimal(38,18) as well, and the read to succeed in this case.
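> One possible workaround on affected versions is to override the inferred schema with the JDBC reader's customSchema option. An untested sketch; whether the option accepts an array type for this column is an assumption:
> {code:scala}
> // Untested sketch: bypass the broken inference by declaring column types
> // explicitly through the JDBC source's customSchema option.
> val fixed = spark.read
>   .format("jdbc")
>   .option("url", "jdbc:postgresql://localhost:5432/testdb")
>   .option("dbtable", "test1")
>   .option("customSchema", "v ARRAY<DECIMAL(38,18)>, d DECIMAL(38,18)")
>   .load()
> {code}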



