Posted to jira@arrow.apache.org by "Neal Richardson (Jira)" <ji...@apache.org> on 2020/09/25 23:31:00 UTC
[jira] [Updated] (ARROW-9293) [R] Add chunk_size to Table$create()
[ https://issues.apache.org/jira/browse/ARROW-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Neal Richardson updated ARROW-9293:
-----------------------------------
Fix Version/s: 3.0.0 (was: 2.0.0)
> [R] Add chunk_size to Table$create()
> ------------------------------------
>
> Key: ARROW-9293
> URL: https://issues.apache.org/jira/browse/ARROW-9293
> Project: Apache Arrow
> Issue Type: Improvement
> Components: R
> Reporter: Neal Richardson
> Priority: Major
> Fix For: 3.0.0
>
>
> While working on ARROW-3308, I noticed that write_feather has a chunk_size argument, which by default will write batches of 64k rows into the file. In principle, a chunking strategy like this would prevent the need to bump up to large_utf8 when ingesting a large character vector because you'd end up with many chunks that each fit into a regular utf8 type. However, the way the function works, the data.frame is converted to a Table with all ChunkedArrays containing a single chunk first, which is where the large_utf8 type gets set. But if Table$create() could be instructed to make multiple chunks, this would be resolved.
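
The chunking strategy described above can be sketched in a few lines of pure Python. This is illustrative only, not Arrow API: `chunk_vector` and its default of 65,536 rows (mirroring write_feather's 64k default) are hypothetical names; the point is that splitting a large character vector into fixed-size chunks up front lets each chunk stay small enough for a regular utf8 array's 32-bit offsets, instead of forcing a single large_utf8 chunk.

```python
# Illustrative sketch (not Arrow API): split a vector into fixed-size
# chunks before array construction, so each chunk can use the regular
# utf8 type rather than one oversized chunk requiring large_utf8.

def chunk_vector(values, chunk_size=65536):
    """Split `values` into consecutive chunks of at most chunk_size rows."""
    return [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]

vec = ["row-%d" % i for i in range(200000)]
chunks = chunk_vector(vec)
print(len(chunks))                                # 4
print(sum(len(c) for c in chunks) == len(vec))    # True
print(max(len(c) for c in chunks) <= 65536)       # True
```

If Table$create() accepted a chunk_size argument, it could apply this kind of split while building each ChunkedArray, rather than forming a single-chunk Table first (which is the step where the large_utf8 type currently gets fixed).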
--
This message was sent by Atlassian Jira
(v8.3.4#803005)