Posted to issues@arrow.apache.org by "thisisnic (via GitHub)" <gi...@apache.org> on 2023/05/28 12:57:36 UTC

[GitHub] [arrow] thisisnic opened a new issue, #35807: [R][C++] Support upcasting NULL columns in Dataset CSV reader

thisisnic opened a new issue, #35807:
URL: https://github.com/apache/arrow/issues/35807

   ### Describe the enhancement requested
   
   When a column is sparsely populated in a Dataset, Arrow may infer its type as `null` and then error when a non-null value is encountered, as shown in the example below. There are existing workarounds: manually specifying the type of the column in question via a schema or partial schema, or increasing the `block_size` read option so that more data is read in before the column type is inferred.
   
   Would it be possible to support upcasting if and only if a column has been inferred as `null`? I'm curious whether this would be feasible or massively complicated.
   
   ``` r
   library(arrow)
   library(dplyr)
   
   df <- tibble::tibble(x = 1:1000001, y = c(rep(NA, 1000000), 123456))
   tf <- tempfile()               
   write_csv_arrow(df, tf)
   
   # Individual file reader has no problems
   read_csv_arrow(tf)
   #> # A tibble: 1,000,001 × 2
   #>        x     y
   #>    <int> <int>
   #>  1     1    NA
   #>  2     2    NA
   #>  3     3    NA
   #>  4     4    NA
   #>  5     5    NA
   #>  6     6    NA
   #>  7     7    NA
   #>  8     8    NA
   #>  9     9    NA
   #> 10    10    NA
   #> # ℹ 999,991 more rows
   
   # Dataset has an error
   open_dataset(tf, format = "csv") %>% collect()
   #> Error in `compute.Dataset()`:
   #> ! Invalid: In CSV column #1: Row #1000002: CSV conversion error to null: invalid value '123456'
   #> ℹ If you have supplied a schema and your data contains a header row, you should supply the argument `skip = 1` to prevent the header being read in as data.
   #> Backtrace:
   #>      ▆
   #>   1. ├─open_dataset(tf, format = "csv") %>% collect()
   #>   2. ├─dplyr::collect(.)
   #>   3. └─arrow:::collect.Dataset(.)
   #>   4.   ├─arrow:::collect.ArrowTabular(compute.Dataset(x), as_data_frame)
   #>   5.   │ └─base::as.data.frame(x, ...)
   #>   6.   └─arrow:::compute.Dataset(x)
   #>   7.     └─base::tryCatch(...)
   #>   8.       └─base (local) tryCatchList(expr, classes, parentenv, handlers)
   #>   9.         └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
   #>  10.           └─value[[3L]](cond)
   #>  11.             └─arrow:::augment_io_error_msg(e, call, schema = schema())
   #>  12.               └─arrow:::handle_csv_read_error(msg, call, schema)
   #>  13.                 └─rlang::abort(msg, call = call)
   ```
   
   <sup>Created on 2023-05-28 with [reprex v2.0.2](https://reprex.tidyverse.org)</sup>
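   For reference, the two workarounds mentioned above can be sketched as follows. This is an untested sketch reusing `tf` from the reprex; the `schema`/`skip` route follows the hint in the error message, while whether `open_dataset()` accepts `read_options = csv_read_options(...)` (and the exact `block_size` needed) may depend on the arrow version.
   
   ``` r
   library(arrow)
   library(dplyr)
   
   # Workaround 1: supply the full schema up front. skip = 1 prevents the
   # header row from being read in as data, as the error message suggests.
   open_dataset(
     tf,
     format = "csv",
     schema = schema(x = int32(), y = int32()),
     skip = 1
   ) %>%
     collect()
   
   # Workaround 2 (assumed API): enlarge the block used for type inference
   # so the non-null value in the final row is seen before y's type is fixed.
   open_dataset(
     tf,
     format = "csv",
     read_options = csv_read_options(block_size = 32L * 1024L * 1024L)
   ) %>%
     collect()
   ```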
   
   
   ### Component(s)
   
   R

