Posted to issues-all@impala.apache.org by "Tim Armstrong (Jira)" <ji...@apache.org> on 2020/06/18 21:51:00 UTC

[jira] [Updated] (IMPALA-7087) Impala is unable to read Parquet decimal columns with lower precision/scale than table metadata

     [ https://issues.apache.org/jira/browse/IMPALA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong updated IMPALA-7087:
----------------------------------
    Labels: decimal parquet ramp-up  (was: decimal parquet)

> Impala is unable to read Parquet decimal columns with lower precision/scale than table metadata
> -----------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-7087
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7087
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Backend
>            Reporter: Tim Armstrong
>            Priority: Major
>              Labels: decimal, parquet, ramp-up
>         Attachments: binary_decimal_precision_and_scale_widening.parquet
>
>
> This is similar to IMPALA-2515, except it relates to a different precision/scale in the file metadata rather than just a mismatch in the number of bytes used to store the data. In many cases we should be able to convert the decimal type on the fly to the higher-precision type.
> {noformat}
> ERROR: File '/hdfs/path/000000_0_x_2' column 'alterd_decimal' has an invalid type length. Expecting: 11 len in file: 8
> {noformat}
> It would be convenient to allow reading Parquet files where the precision/scale in the file can be converted to the precision/scale in the table metadata without loss of precision.
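
As an illustration only (not Impala's actual reader code), here is a minimal C++ sketch of what such an on-the-fly widening could look like, assuming the file column is DECIMAL(18,2) stored in an 8-byte FIXED_LEN_BYTE_ARRAY while the table declares something like DECIMAL(24,5), which needs 11 bytes. The byte-size rule and the helper names are illustrative assumptions.

{code:cpp}
// Sketch only: the byte-size rule and names here are illustrative, not
// Impala's real reader code. __int128 is a GCC/Clang extension.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <iostream>

// Minimum number of bytes a two's-complement FIXED_LEN_BYTE_ARRAY needs to
// hold any unscaled value of the given decimal precision (magnitude up to
// 10^precision - 1, plus a sign bit).
int MinBytesForPrecision(int precision) {
  int bits = static_cast<int>(std::ceil(precision * std::log2(10.0))) + 1;
  return (bits + 7) / 8;
}

// Widening (file_p, file_s) -> (table_p, table_s) loses nothing iff the scale
// does not shrink and the integer-digit capacity does not shrink.
bool CanWidenLosslessly(int file_p, int file_s, int table_p, int table_s) {
  return table_s >= file_s && (table_p - table_s) >= (file_p - file_s);
}

// Decode a big-endian two's-complement value of `len` bytes and rescale the
// unscaled integer from the file's scale to the table's scale.
__int128 WidenDecimal(const uint8_t* bytes, size_t len, int file_s, int table_s) {
  unsigned __int128 acc = (bytes[0] & 0x80) ? ~static_cast<unsigned __int128>(0) : 0;
  for (size_t i = 0; i < len; ++i) acc = (acc << 8) | bytes[i];
  __int128 value = static_cast<__int128>(acc);
  for (int s = file_s; s < table_s; ++s) value *= 10;
  return value;
}

int main() {
  // File column DECIMAL(18,2) -> 8 bytes; table column DECIMAL(24,5) -> 11 bytes,
  // matching the "Expecting: 11 len in file: 8" mismatch in the error above.
  std::cout << MinBytesForPrecision(18) << " " << MinBytesForPrecision(24) << "\n";
  std::cout << CanWidenLosslessly(18, 2, 24, 5) << "\n";  // 1: safe to widen

  // 1234.56 at scale 2 has unscaled value 123456 (0x01E240), big-endian in 8 bytes.
  const uint8_t raw[8] = {0, 0, 0, 0, 0, 0x01, 0xE2, 0x40};
  __int128 widened = WidenDecimal(raw, sizeof(raw), 2, 5);
  std::cout << static_cast<long long>(widened) << "\n";  // 123456000, i.e. 1234.56000
  return 0;
}
{code}

With these assumed inputs the widening is lossless because the scale grows (2 to 5) and the integer-digit capacity also grows (16 to 19), which is exactly the condition CanWidenLosslessly encodes.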



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-all-unsubscribe@impala.apache.org
For additional commands, e-mail: issues-all-help@impala.apache.org