Posted to issues@arrow.apache.org by "Neal Richardson (Jira)" <ji...@apache.org> on 2020/07/01 22:52:00 UTC

[jira] [Created] (ARROW-9293) [R] Add chunk_size to Table$create()

Neal Richardson created ARROW-9293:
--------------------------------------

             Summary: [R] Add chunk_size to Table$create()
                 Key: ARROW-9293
                 URL: https://issues.apache.org/jira/browse/ARROW-9293
             Project: Apache Arrow
          Issue Type: Improvement
          Components: R
            Reporter: Neal Richardson
             Fix For: 2.0.0


While working on ARROW-3308, I noticed that write_feather() has a chunk_size argument, which by default writes batches of 64k rows into the file. In principle, a chunking strategy like this would avoid the need to bump up to large_utf8 when ingesting a large character vector, because you'd end up with many chunks that each fit within a regular utf8 type. However, write_feather() first converts the data.frame to a Table in which every ChunkedArray contains a single chunk, and that is where the large_utf8 type gets set. If Table$create() could be instructed to make multiple chunks, this would be resolved.
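A rough sketch of how this could look from R. Note that the chunk_size argument to Table$create() shown at the end is the proposed, not-yet-existing API, and the data.frame is purely illustrative; only write_feather() currently accepts chunk_size.

    library(arrow)

    df <- data.frame(x = rep("some string value", 1000), stringsAsFactors = FALSE)

    # Today Table$create() puts each column into a single chunk, so a character
    # column whose total data exceeds the utf8 offset limit (2GB) gets promoted
    # to large_utf8 at this point.
    tab <- Table$create(df)

    # write_feather() already chunks on write (default chunk_size = 65536 rows),
    # but by then the column types have been fixed by the single-chunk Table.
    write_feather(df, tempfile(fileext = ".feather"), chunk_size = 65536)

    # Proposed (hypothetical): let Table$create() build multiple chunks up front
    # so each chunk stays within the regular utf8 limit.
    # tab <- Table$create(df, chunk_size = 65536)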


