Posted to jira@arrow.apache.org by "Antoine Pitrou (Jira)" <ji...@apache.org> on 2021/06/23 15:19:00 UTC
[jira] [Comment Edited] (ARROW-9293) [R] Add chunk_size to Table$create()
[ https://issues.apache.org/jira/browse/ARROW-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17368270#comment-17368270 ]
Antoine Pitrou edited comment on ARROW-9293 at 6/23/21, 3:18 PM:
-----------------------------------------------------------------
cc [~thisisnic]
was (Author: pitrou):
cc @thisisnic
> [R] Add chunk_size to Table$create()
> ------------------------------------
>
> Key: ARROW-9293
> URL: https://issues.apache.org/jira/browse/ARROW-9293
> Project: Apache Arrow
> Issue Type: Improvement
> Components: R
> Reporter: Neal Richardson
> Assignee: Romain Francois
> Priority: Major
> Fix For: 5.0.0
>
>
> While working on ARROW-3308, I noticed that write_feather has a chunk_size argument, which by default writes batches of 64k rows to the file. In principle, a chunking strategy like this would avoid the need to bump up to large_utf8 when ingesting a large character vector, because you'd end up with many chunks that each fit into a regular utf8 type. However, as the function currently works, the data.frame is first converted to a Table whose ChunkedArrays each contain a single chunk, and that is where the large_utf8 type gets set. If Table$create() could be instructed to make multiple chunks, this would be resolved; see the sketch below.
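A minimal sketch of the proposed behavior in R. The chunk_size argument to Table$create() shown here is the argument proposed in this issue, not a confirmed API; write_feather()'s chunk_size, by contrast, already exists:

    library(arrow)

    # write_feather() already accepts chunk_size (default 64k rows),
    # as described above:
    df <- data.frame(x = rep("some text", 2e5), stringsAsFactors = FALSE)
    write_feather(df, "out.feather", chunk_size = 65536L)

    # Proposed: let Table$create() chunk the input the same way, so each
    # chunk's string column stays small enough for a regular utf8 type
    # instead of forcing large_utf8. chunk_size here is an assumption
    # based on the proposal, not the final API.
    tab <- Table$create(df, chunk_size = 65536L)
    tab$column(0)$num_chunks  # would then be greater than 1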
--
This message was sent by Atlassian Jira
(v8.3.4#803005)