Posted to jira@arrow.apache.org by "Liangcai li (Jira)" <ji...@apache.org> on 2022/10/02 12:07:00 UTC

[jira] [Commented] (ARROW-17912) Arrow C++ IPC fails to send an empty table, but Arrow Java can do it.

    [ https://issues.apache.org/jira/browse/ARROW-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17612091#comment-17612091 ] 

Liangcai li commented on ARROW-17912:
-------------------------------------

After some investigation, according to [the code here|https://github.com/apache/arrow/blob/release-8.0.0/cpp/src/arrow/table.cc#L621], it looks like the *_TableBatchReader_* returns null, not an empty batch, when given an empty table. So the RecordBatchWriter skips the empty table and writes no batches into the stream. But [Pyarrow requires at least one batch|https://github.com/apache/arrow/blob/release-7.0.0/python/pyarrow/table.pxi#L1936] when the schema is not specified.
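
To make the mismatch concrete, here is a minimal repro sketch using only Pyarrow (the schema and column are made up for illustration; `write_table` delegates to the same C++ RecordBatchWriter/TableBatchReader path described above, so this assumes the release-8.0.0 behavior):

```
import pyarrow as pa

schema = pa.schema([pa.field("a", pa.int32())])
empty_table = pa.Table.from_arrays([pa.array([], type=pa.int32())], schema=schema)

# Writing a zero-row table produces a stream holding only a schema message,
# since TableBatchReader yields no batches for it on the affected releases.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, schema) as writer:
    writer.write_table(empty_table)

# Collect the batches the way the Spark serializer does, then rebuild a Table.
reader = pa.ipc.open_stream(sink.getvalue())
batches = list(reader)  # -> [] (no batches in the stream)

pa.Table.from_batches(batches)                 # raises: Must pass schema, or at least one RecordBatch
pa.Table.from_batches(batches, schema=schema)  # works: an empty table with 0 rows
```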

> Arrow C++ IPC fails to send an empty table, but Arrow Java can do it.
> ---------------------------------------------------------------------
>
>                 Key: ARROW-17912
>                 URL: https://issues.apache.org/jira/browse/ARROW-17912
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>            Reporter: Liangcai li
>            Priority: Major
>
> My current work is about the Pyspark Cogroup Pandas UDF, where two processes are involved: the JVM one (sender) and the Python one (receiver).
> [Spark is using the Arrow Java `ArrowStreamWriter`|https://github.com/apache/spark/blob/branch-3.3/sql/core/src/main/scala/org/apache/spark/sql/execution/python/CoGroupedArrowPythonRunner.scala#L99] to serialize Arrow tables sent from the JVM process to the Python process, and ArrowStreamWriter handles empty tables correctly.
> [cuDF, on the other hand, is using the Arrow C++ RecordBatchWriter|https://github.com/rapidsai/cudf/blob/branch-22.10/java/src/main/native/src/TableJni.cpp#L254] to do the same serialization, but this leads to the error below on the Python side, where [Pyspark calls Pyarrow *Table.from_batches*|https://github.com/apache/spark/blob/branch-3.3/python/pyspark/sql/pandas/serializers.py#L366] to deserialize the arrow stream (a possible sender-side workaround is sketched after this quoted description).
> ```
> 'Must pass schema, or at least one RecordBatch'
> ```
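
A possible sender-side workaround, sketched in Pyarrow for brevity (the actual senders are Java and C++, and the helper name here is made up): explicitly write one zero-row batch when the table is empty, so the receiver always gets at least one RecordBatch.

```
import pyarrow as pa

# Hypothetical helper: ensure the stream always carries at least one batch.
def write_table_or_empty_batch(writer, table):
    if table.num_rows == 0:
        # Build a zero-row batch with the same schema and write it explicitly,
        # since write_table() would emit no batches for an empty table.
        arrays = [pa.array([], type=field.type) for field in table.schema]
        writer.write_batch(pa.RecordBatch.from_arrays(arrays, schema=table.schema))
    else:
        writer.write_table(table)
```

With something like this in place, the receiver's *Table.from_batches* call gets one empty batch and can take the schema from it, even without the schema argument.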



--
This message was sent by Atlassian Jira
(v8.20.10#820010)