Posted to jira@arrow.apache.org by "Joris Van den Bossche (Jira)" <ji...@apache.org> on 2020/09/09 12:58:00 UTC
[jira] [Updated] (ARROW-2628) [Python] parquet.write_to_dataset is memory-hungry on large DataFrames
[ https://issues.apache.org/jira/browse/ARROW-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Joris Van den Bossche updated ARROW-2628:
-----------------------------------------
Labels: dataset dataset-parquet-write parquet (was: dataset parquet)
> [Python] parquet.write_to_dataset is memory-hungry on large DataFrames
> ----------------------------------------------------------------------
>
> Key: ARROW-2628
> URL: https://issues.apache.org/jira/browse/ARROW-2628
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Reporter: Wes McKinney
> Priority: Major
> Labels: dataset, dataset-parquet-write, parquet
>
> See discussion in https://github.com/apache/arrow/issues/1749. We should consider strategies for writing very large tables to a partitioned directory scheme.
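Editor's note: one strategy along the lines the description asks for, sketched below as a hypothetical example (the helper name, chunk size, and partition columns are illustrative and not from the issue or the linked discussion): slice the DataFrame into row chunks and convert/write one chunk at a time with pyarrow.parquet.write_to_dataset, so that only a single chunk's Arrow Table is materialized in memory instead of a fully partitioned copy of the whole frame.

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    def write_partitioned_in_chunks(df, root_path, partition_cols, chunk_rows=1_000_000):
        # Hypothetical helper: slice df into fixed-size row chunks and append
        # each chunk to the partitioned dataset; every call to write_to_dataset
        # adds new part files under the matching partition directories.
        for start in range(0, len(df), chunk_rows):
            chunk = df.iloc[start:start + chunk_rows]
            table = pa.Table.from_pandas(chunk, preserve_index=False)
            pq.write_to_dataset(table, root_path=root_path, partition_cols=partition_cols)

    # Illustrative usage (names are placeholders):
    # write_partitioned_in_chunks(big_df, "dataset_root", partition_cols=["year"])

The trade-off of this workaround is more, smaller files per partition, which may call for a later compaction pass; it does not change the memory behavior of write_to_dataset itself, which is what this issue tracks.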
--
This message was sent by Atlassian Jira
(v8.3.4#803005)