Posted to jira@arrow.apache.org by "Charlie Gao (Jira)" <ji...@apache.org> on 2022/07/07 20:18:00 UTC

[jira] [Comment Edited] (ARROW-17008) [R] Parquet Snappy Compression Fails for Integer Type Data

    [ https://issues.apache.org/jira/browse/ARROW-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17563947#comment-17563947 ] 

Charlie Gao edited comment on ARROW-17008 at 7/7/22 8:17 PM:
-------------------------------------------------------------

Hope the below helps. By the way, format versions 1.0 and 2.0 produce the same results.
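
For reference, a minimal sketch of how the two format versions might be compared (this assumes write_parquet()'s version argument; the file names here are illustrative):

{code:r}
library(arrow)

# Write the same integer column under both Parquet format versions;
# in both cases snappy gives no size reduction (space_saved: -0% below).
write_parquet(data.frame(x = 1:1e6), "snappy_v1.parquet",
              compression = "snappy", version = "1.0")
write_parquet(data.frame(x = 1:1e6), "snappy_v2.parquet",
              compression = "snappy", version = "2.0")

file.size(c("snappy_v1.parquet", "snappy_v2.parquet"))
{code}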

{noformat}
############ file meta data ############
created_by: parquet-cpp-arrow version 8.0.0
num_columns: 1
num_rows: 1000000
num_row_groups: 1
format_version: 1.0
serialized_size: 929


############ Columns ############
x

############ Column(x) ############
name: x
path: x
max_definition_level: 1
max_repetition_level: 0
physical_type: INT32
logical_type: None
converted_type (legacy): NONE
compression: SNAPPY (space_saved: -0%)
{noformat}

{noformat}

############ file meta data ############
created_by: parquet-cpp-arrow version 8.0.0
num_columns: 1
num_rows: 1000000
num_row_groups: 1
format_version: 1.0
serialized_size: 936


############ Columns ############
x

############ Column(x) ############
name: x
path: x
max_definition_level: 1
max_repetition_level: 0
physical_type: DOUBLE
logical_type: None
converted_type (legacy): NONE
compression: SNAPPY (space_saved: 49%)
{noformat}
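
The two files inspected above were presumably produced with the reproduction code from the issue description below; a minimal sketch (assuming the arrow R package is attached) that writes all four files and compares their on-disk sizes with base R's file.size():

{code:r}
library(arrow)

# Reproduction from the issue description: integer vs. double column,
# snappy vs. uncompressed.
write_parquet(data.frame(x = 1:1e6), "snappy.parquet", compression = "snappy")
write_parquet(data.frame(x = 1:1e6), "uncomp.parquet", compression = "uncompressed")
write_parquet(data.frame(x = as.double(1:1e6)), "snappyd.parquet", compression = "snappy")
write_parquet(data.frame(x = as.double(1:1e6)), "uncompd.parquet", compression = "uncompressed")

# Compare on-disk sizes: the two integer files come out about the same size
# (space_saved: -0% above), while the snappy double file is noticeably smaller
# than its uncompressed counterpart (space_saved: 49% above).
file.size(c("snappy.parquet", "uncomp.parquet", "snappyd.parquet", "uncompd.parquet"))
{code}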


> [R] Parquet Snappy Compression Fails for Integer Type Data
> ----------------------------------------------------------
>
>                 Key: ARROW-17008
>                 URL: https://issues.apache.org/jira/browse/ARROW-17008
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: R
>    Affects Versions: 8.0.0
>         Environment: R4.2.1 Ubuntu 22.04 x86_64
> R4.1.2 Ubuntu 22.04 Aarch64
>            Reporter: Charlie Gao
>            Priority: Major
>
> Snappy compression is not working when writing integer type data to Parquet.
> E.g. compare the file sizes produced by:
> {code:r}
> write_parquet(data.frame(x = 1:1e6), "snappy.parquet", compression = "snappy")
> write_parquet(data.frame(x = 1:1e6), "uncomp.parquet", compression = "uncompressed")
> {code}
> whereas for double type data, compression works as expected:
> {code:r}
> write_parquet(data.frame(x = as.double(1:1e6)), "snappyd.parquet", compression = "snappy")
> write_parquet(data.frame(x = as.double(1:1e6)), "uncompd.parquet", compression = "uncompressed")
> {code}
> I have inspected the integer files using parquet-tools and the compression level shows as 0%. For comparison, I can achieve compression on the same data using Spark (sparklyr) etc.
> Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)