Posted to issues@arrow.apache.org by "Robert Nishihara (JIRA)" <ji...@apache.org> on 2017/07/07 19:41:00 UTC

[jira] [Updated] (ARROW-1194) Getting record batch size with pa.get_record_batch_size returns a size that is too small for pandas DataFrame.

     [ https://issues.apache.org/jira/browse/ARROW-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Nishihara updated ARROW-1194:
------------------------------------
    Summary: Getting record batch size with pa.get_record_batch_size returns a size that is too small for pandas DataFrame.  (was: Trouble deserializing a pandas DataFrame from a PyArrow buffer.)

> Getting record batch size with pa.get_record_batch_size returns a size that is too small for pandas DataFrame.
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-1194
>                 URL: https://issues.apache.org/jira/browse/ARROW-1194
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.5.0
>         Environment: Ubuntu 16.04
> Python 3.6
>            Reporter: Robert Nishihara
>
> I'm running into the following problem.
> Suppose I create a pandas DataFrame and convert it to a record batch.
> {code:language=python}
> import pyarrow as pa
> import pandas as pd
> df = pd.DataFrame({"a": [1, 2, 3]})
> record_batch = pa.RecordBatch.from_pandas(df)
> {code}
> Its size is 352 bytes according to
> {code:language=python}
> pa.get_record_batch_size(record_batch)  # This is 352.
> {code}
> However, if I write it using a stream writer and then inspect the resulting buffer, that buffer has size 928.
> {code:language=python}
> sink = pa.BufferOutputStream()
> stream_writer = pa.RecordBatchStreamWriter(sink, record_batch.schema)
> stream_writer.write_batch(record_batch)
> new_buf = sink.get_result()
> new_buf.size  # This is 928.
> {code}
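> The extra 576 bytes appear to be stream framing that the writer adds around the batch (the schema message plus alignment padding), which {{pa.get_record_batch_size}} does not count. A minimal sketch to check this, using only the APIs above (interpreting the empty-stream size as framing overhead is my assumption):
> {code:language=python}
> # Write an empty stream: just the schema message and the end-of-stream marker.
> schema_sink = pa.BufferOutputStream()
> schema_writer = pa.RecordBatchStreamWriter(schema_sink, record_batch.schema)
> schema_writer.close()
> schema_sink.get_result().size  # Approximates the framing bytes not counted above.
> {code}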
> I'm running into this problem because I'm attempting to write the pandas DataFrame to the Plasma object store, which requires knowing the size ahead of time. The code below assumes Plasma has been started and a client has been created.
> {code:language=python}
> import numpy as np
> import pyarrow.plasma as plasma  # Plasma is assumed started, with a client created, as noted above.
> 
> data_size = pa.get_record_batch_size(record_batch)
> object_id = plasma.ObjectID(np.random.bytes(20))
> buf = client.create(object_id, data_size)  # Note that if I replace "data_size" with "data_size + 1000", then it works.
> stream = plasma.FixedSizeBufferOutputStream(buf)
> stream_writer = pa.RecordBatchStreamWriter(stream, record_batch.schema)
> stream_writer.write_batch(record_batch)
> {code}
> The above fails because the buffer allocated in Plasma only has size 352, but 928 bytes are needed.
> So my question is: am I determining the size of the record batch incorrectly, or could there be a bug in {{pa.get_record_batch_size}}?
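> In the meantime, a possible workaround (a sketch, with my own variable names; it pays for an extra serialization pass): serialize the batch to an in-memory buffer first to learn the exact stream size, then allocate that many bytes in Plasma.
> {code:language=python}
> # Measure the exact serialized size by writing to an in-memory stream first.
> measure_sink = pa.BufferOutputStream()
> measure_writer = pa.RecordBatchStreamWriter(measure_sink, record_batch.schema)
> measure_writer.write_batch(record_batch)
> exact_size = measure_sink.get_result().size  # 928 here, not 352.
> 
> # Allocate a Plasma buffer of exactly that size and write the batch into it.
> object_id = plasma.ObjectID(np.random.bytes(20))
> buf = client.create(object_id, exact_size)
> stream = plasma.FixedSizeBufferOutputStream(buf)
> stream_writer = pa.RecordBatchStreamWriter(stream, record_batch.schema)
> stream_writer.write_batch(record_batch)
> {code}
> This guarantees the Plasma allocation matches what the stream writer actually produces.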



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)