Posted to jira@arrow.apache.org by "Benson Muite (Jira)" <ji...@apache.org> on 2022/10/23 19:18:00 UTC

[jira] [Updated] (ARROW-18137) [C++][Python][Docs] Allow passing no aggregations to TableGroupBy.aggregate

     [ https://issues.apache.org/jira/browse/ARROW-18137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benson Muite updated ARROW-18137:
---------------------------------
    Summary: [C++][Python][Docs] Allow passing no aggregations to TableGroupBy.aggregate  (was: Allow passing no aggregations to TableGroupBy.aggregate)

> [C++][Python][Docs] Allow passing no aggregations to TableGroupBy.aggregate
> ---------------------------------------------------------------------------
>
>                 Key: ARROW-18137
>                 URL: https://issues.apache.org/jira/browse/ARROW-18137
>             Project: Apache Arrow
>          Issue Type: New Feature
>          Components: C++, Python
>    Affects Versions: 9.0.0
>            Reporter: Jacek Pliszka
>            Assignee: Jacek Pliszka
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> If we could allow TableGroupBy.aggregate to accept no aggregation functions, it would behave like pandas drop_duplicates:
> {code:python}
> t.group_by(['keys', 'values']).aggregate()
> {code}
> I did some naive benchmarks, and it looks like this should be about 30% faster than converting to pandas and deduplicating. This was my naive test:
> {code:python}
> t.append_column('i', pa.array([1] * len(t), pa.int64())).group_by(['keys', 'values']).aggregate([("i", "max")]).drop(['i_max'])
> {code}
> On a small 5M-row table this took 245 ms, versus 359 ms for t.to_pandas().drop_duplicates().
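> For reference, a rough sketch of how such a comparison could be reproduced (the table construction, row count, and key cardinality here are my assumptions, not the exact data I measured):
> {code:python}
> # Hypothetical benchmark sketch: build a ~5M-row table with duplicate rows,
> # then time the group_by workaround against to_pandas().drop_duplicates().
> import time
> import numpy as np
> import pyarrow as pa
>
> n = 5_000_000
> rng = np.random.default_rng(0)
> t = pa.table({
>     'keys': rng.integers(0, 1_000, n),
>     'values': rng.integers(0, 1_000, n),
> })
>
> start = time.perf_counter()
> deduped_arrow = (
>     t.append_column('i', pa.array([1] * len(t), pa.int64()))
>      .group_by(['keys', 'values'])
>      .aggregate([("i", "max")])
>      .drop(['i_max'])
> )
> print("arrow group_by workaround:", time.perf_counter() - start)
>
> start = time.perf_counter()
> deduped_pandas = t.to_pandas().drop_duplicates()
> print("to_pandas().drop_duplicates():", time.perf_counter() - start)
> {code}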
> An actual aggregation without adding a dummy column should be even faster, and it would still provide drop_duplicates functionality until a better implementation arrives.
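> Until then, the workaround can be wrapped in a small helper (the function name and dummy column name below are mine, not an existing API):
> {code:python}
> import pyarrow as pa
>
> def drop_duplicates(table, keys):
>     """Return one row per unique combination of `keys`.
>
>     Sketch of the dummy-column workaround; only the listed key columns
>     are kept in the result, matching what a no-aggregation
>     group_by(...).aggregate() form would return.
>     """
>     dummy = pa.array([1] * len(table), pa.int64())
>     return (
>         table.append_column('_dummy', dummy)
>              .group_by(keys)
>              .aggregate([('_dummy', 'max')])
>              .drop(['_dummy_max'])
>     )
> {code}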



--
This message was sent by Atlassian Jira
(v8.20.10#820010)