Posted to issues@arrow.apache.org by "Wes McKinney (JIRA)" <ji...@apache.org> on 2017/08/20 16:35:00 UTC

[jira] [Commented] (ARROW-1382) [Python] Deduplicate ndarrays when using pyarrow.serialize

    [ https://issues.apache.org/jira/browse/ARROW-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134495#comment-16134495 ] 

Wes McKinney commented on ARROW-1382:
-------------------------------------

Deduplication would be nice. I changed the issue title to reflect this.

> [Python] Deduplicate ndarrays when using pyarrow.serialize
> ----------------------------------------------------------
>
>                 Key: ARROW-1382
>                 URL: https://issues.apache.org/jira/browse/ARROW-1382
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>            Reporter: Robert Nishihara
>
> If a Python object appears multiple times within a list/tuple/dictionary, then when pyarrow serializes the object, it will duplicate the object many times. This leads to a potentially huge expansion in the size of the object (e.g., the serialized version of {{100 * [np.zeros(10 ** 6)]}} will be 100 times bigger than it needs to be).
> {code}
> import pyarrow as pa
> l = [0]
> original_object = [l, l]
> # Serialize and deserialize the object.
> buf = pa.serialize(original_object).to_buffer()
> new_object = pa.deserialize(buf)
> # This works.
> assert original_object[0] is original_object[1]
> # This fails.
> assert new_object[0] is new_object[1]
> {code}
> One potential way to address this is to use the Arrow dictionary encoding.
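Whatever encoding is used on the wire, the core mechanism is the same: remember objects already seen (keyed by identity) and emit a back-reference instead of re-serializing them. A minimal sketch in plain Python, purely illustrative (the `dedup_encode`/`dedup_decode` names and tagged-tuple format are made up here, not pyarrow's actual implementation):

```python
# Hypothetical sketch of identity-based deduplication for nested lists.
# Repeated references to the same object are serialized once; later
# occurrences become a ("ref", index) back-reference.

def dedup_encode(obj, seen=None):
    """Encode nested lists, replacing repeated objects with ("ref", index)."""
    if seen is None:
        seen = {}
    if id(obj) in seen:
        # Already serialized this exact object: emit a back-reference.
        return ("ref", seen[id(obj)])
    if isinstance(obj, list):
        index = len(seen)
        seen[id(obj)] = index
        return ("list", index, [dedup_encode(item, seen) for item in obj])
    return ("val", obj)

def dedup_decode(encoded, memo=None):
    """Rebuild the structure, resolving back-references to shared objects."""
    if memo is None:
        memo = {}
    tag = encoded[0]
    if tag == "ref":
        return memo[encoded[1]]
    if tag == "list":
        _, index, items = encoded
        out = []
        memo[index] = out  # register before recursing, so cycles/refs resolve
        out.extend(dedup_decode(item, memo) for item in items)
        return out
    return encoded[1]

# Mirrors the failing assertion from the issue: identity survives the round trip.
l = [0]
roundtripped = dedup_decode(dedup_encode([l, l]))
assert roundtripped[0] is roundtripped[1]
```

The same bookkeeping extends to ndarrays by keying on `id(array)` and writing the buffer only on first encounter, which is where the 100x blow-up in the `100 * [np.zeros(10 ** 6)]` example would disappear.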



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)