Posted to jira@arrow.apache.org by "Jonathan Keane (Jira)" <ji...@apache.org> on 2021/07/26 20:26:00 UTC

[jira] [Resolved] (ARROW-12688) [R] Use DuckDB to query an Arrow Dataset

     [ https://issues.apache.org/jira/browse/ARROW-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Keane resolved ARROW-12688.
------------------------------------
    Fix Version/s: 5.0.0
       Resolution: Fixed

Issue resolved by pull request 10780
[https://github.com/apache/arrow/pull/10780]

> [R] Use DuckDB to query an Arrow Dataset
> ----------------------------------------
>
>                 Key: ARROW-12688
>                 URL: https://issues.apache.org/jira/browse/ARROW-12688
>             Project: Apache Arrow
>          Issue Type: New Feature
>          Components: C++, R
>            Reporter: Neal Richardson
>            Assignee: Jonathan Keane
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 5.0.0
>
>          Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> DuckDB can read data from an Arrow C-interface stream. Once we can provide that struct from R, DuckDB should be able to query that stream.
> A first step is just connecting the pieces. A second step would be to handle parts of the DuckDB query in Arrow by pushing filtering/projection down to it.
> We need a function something like this:
> {code}
> #' Run a DuckDB query on Arrow data
> #'
> #' @param .data An `arrow` data object: `Dataset`, `Table`, `RecordBatch`, or 
> #' an `arrow_dplyr_query` containing filter/mutate/etc. expressions
> #' @return A `duckdb::duckdb_connection`
> to_duckdb <- function(.data) {
>   # ARROW-12687: [C++][Python][Dataset] Convert Scanner into a RecordBatchReader 
>   reader <- Scanner$create(.data)$ToRecordBatchReader()
>   # ARROW-12689: [R] Implement ArrowArrayStream C interface
>   stream_ptr <- allocate_arrow_array_stream()
>   on.exit(delete_arrow_array_stream(stream_ptr))
>   # Export the reader created above, not the raw input
>   ExportRecordBatchReader(reader, stream_ptr)
>   # TODO: DuckDB method to create table/connection from ArrowArrayStream ptr
>   duckdb::duck_connection_from_arrow_stream(stream_ptr)
> }
> {code}
> Assuming this existed, we could do something like the following (a variation of the example at https://arrow.apache.org/docs/r/articles/dataset.html):
> {code}
> ds <- open_dataset("nyc-taxi", partitioning = c("year", "month"))
> ds %>%
>   filter(total_amount > 100, year == 2015) %>%
>   select(tip_amount, total_amount, passenger_count) %>%
>   mutate(tip_pct = 100 * tip_amount / total_amount) %>%
>   to_duckdb() %>%
>   group_by(passenger_count) %>%
>   summarise(
>     median_tip_pct = median(tip_pct),
>     n = n()
>   )
> {code}
> and DuckDB would do the aggregation, while the data reading, predicate pushdown, filtering, and projection would happen in Arrow. Or you could call {{dbGetQuery(ds, "SOME SQL")}} and have that evaluate against the Arrow data.
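> For the {{dbGetQuery()}} path, here is a minimal sketch of what that could look like, assuming duckdb exposes a way to register Arrow data as a virtual table (the {{duckdb_register_arrow()}} helper and the table name below are assumptions for illustration, not a committed API):
> {code}
> # Hypothetical sketch: register the Arrow Dataset with a DuckDB connection,
> # then run plain SQL against it via DBI. duckdb_register_arrow() is assumed.
> library(DBI)
> con <- DBI::dbConnect(duckdb::duckdb())
> duckdb::duckdb_register_arrow(con, "nyc_taxi", ds)
> DBI::dbGetQuery(con, "
>   SELECT passenger_count, count(*) AS n
>   FROM nyc_taxi
>   WHERE total_amount > 100 AND year = 2015
>   GROUP BY passenger_count
> ")
> DBI::dbDisconnect(con, shutdown = TRUE)
> {code}
> As in the dplyr pipeline above, the scan itself (and whatever filters DuckDB can push down) would be handled by Arrow, with DuckDB only doing the SQL evaluation.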



--
This message was sent by Atlassian Jira
(v8.3.4#803005)