Posted to jira@arrow.apache.org by "Weston Pace (Jira)" <ji...@apache.org> on 2021/10/01 11:32:00 UTC

[jira] [Commented] (ARROW-13611) [C++] Scanning datasets does not enforce back pressure

    [ https://issues.apache.org/jira/browse/ARROW-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17423248#comment-17423248 ] 

Weston Pace commented on ARROW-13611:
-------------------------------------

That is actually what I've been working on today (shame on me for not toggling Start Progress).  Right now I have a PR up that adds backpressure back in for unordered scans.  I expect to have a PR ready that adds backpressure for ordered scans soon (I am hoping tomorrow).  If you are interested I can probably get you a test wheel (are you installing with conda or PyPI?) sometime next week, and I would appreciate it if you could let me know whether it solves your issue.
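
A minimal sketch of how such a check might look, reusing the reproduction from the issue description below; the dataset path is a placeholder, and the time.sleep call is only there to simulate a slow consumer so that missing backpressure is easier to observe:

{code:python}
import time

import pyarrow as pa
import pyarrow.dataset as ds

# Placeholder path -- point this at any multi-GB CSV dataset.
dataset = ds.dataset('/path/to/big_csv_dataset', format='csv')

num_rows = 0
for batch in dataset.to_batches():
    # With backpressure enforced, total_allocated_bytes() should stay
    # roughly flat even when the consumer is slow.
    print(pa.total_allocated_bytes())
    time.sleep(0.01)  # simulate a slow consumer
    num_rows += batch.num_rows
print(num_rows)
{code}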

> [C++] Scanning datasets does not enforce back pressure
> ------------------------------------------------------
>
>                 Key: ARROW-13611
>                 URL: https://issues.apache.org/jira/browse/ARROW-13611
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>    Affects Versions: 4.0.0, 5.0.0, 4.0.1
>            Reporter: Weston Pace
>            Assignee: Weston Pace
>            Priority: Major
>              Labels: pull-request-available, query-engine
>             Fix For: 6.0.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> I have a simple test case where I scan the batches of a 4GB dataset and print out the currently used memory:
> {code:python}
> import pyarrow as pa
> import pyarrow.dataset as ds
> dataset = ds.dataset('/home/pace/dev/data/dataset/csv/5_big', format='csv')
> num_rows = 0
> for batch in dataset.to_batches():
>     print(pa.total_allocated_bytes())
>     num_rows += batch.num_rows
> print(num_rows)
> {code}
> In pyarrow 3.0.0 this consumes just over 5 MB.  In pyarrow 4.0.0 and 5.0.0 this consumes multiple GB of RAM.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)