Posted to issues@calcite.apache.org by "Gian Merlino (JIRA)" <ji...@apache.org> on 2017/01/17 21:20:26 UTC
[jira] [Commented] (CALCITE-1579) Druid adapter: wrong semantics of groupBy query limit with granularity
[ https://issues.apache.org/jira/browse/CALCITE-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826835#comment-15826835 ]
Gian Merlino commented on CALCITE-1579:
---------------------------------------
See also https://github.com/druid-io/druid/issues/1926. Druid's built-in SQL layer deals with this by avoiding granularity != "all" for groupBys. Instead, if __time appears in a GROUP BY, it uses a DimensionSpec for __time (with a queryGranularity if there's a floor).
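For reference, a minimal sketch of that translation in Druid's native groupBy JSON, assuming the table from the query below; the intervals, output names, and aggregator types are illustrative placeholders, not taken from this issue:

{code:json}
{
  "queryType": "groupBy",
  "dataSource": "store_sales_sold_time_subset",
  "granularity": "all",
  "intervals": ["1900-01-01/3000-01-01"],
  "dimensions": [
    "i_brand_id",
    {
      "type": "extraction",
      "dimension": "__time",
      "outputName": "floor_day",
      "extractionFn": {
        "type": "timeFormat",
        "granularity": "day",
        "timeZone": "UTC"
      }
    }
  ],
  "aggregations": [
    {"type": "longMax", "fieldName": "ss_quantity", "name": "m"},
    {"type": "doubleSum", "fieldName": "ss_wholesale_cost", "name": "s"}
  ],
  "limitSpec": {
    "type": "default",
    "columns": [{"dimension": "s", "direction": "ascending"}],
    "limit": 10
  }
}
{code}

Because granularity is "all", the query produces a single result set and the limitSpec applies globally, matching SQL LIMIT semantics.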
> Druid adapter: wrong semantics of groupBy query limit with granularity
> ----------------------------------------------------------------------
>
> Key: CALCITE-1579
> URL: https://issues.apache.org/jira/browse/CALCITE-1579
> Project: Calcite
> Issue Type: Bug
> Components: druid
> Affects Versions: 1.11.0
> Reporter: Jesus Camacho Rodriguez
> Assignee: Jesus Camacho Rodriguez
> Priority: Critical
>
> Similar to CALCITE-1578, but for GroupBy queries. The limit is applied per granularity unit, not globally for the query.
> Currently, the following SQL query infers granularity 'day' for the Druid _groupBy_ query and pushes the limit into it, which is incorrect because Druid then applies the limit within each day bucket rather than to the overall result.
> {code:sql}
> SELECT i_brand_id, floor_day(`__time`), max(ss_quantity), sum(ss_wholesale_cost) as s
> FROM store_sales_sold_time_subset
> GROUP BY i_brand_id, floor_day(`__time`)
> ORDER BY s
> LIMIT 10;
> {code}
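> For illustration, the shape of the incorrectly translated native query is sketched below, showing only the fields relevant to the bug; the limitSpec values mirror the SQL above, everything else is omitted:
>
> {code:json}
> {
>   "queryType": "groupBy",
>   "granularity": "day",
>   "limitSpec": {
>     "type": "default",
>     "columns": [{"dimension": "s", "direction": "ascending"}],
>     "limit": 10
>   }
> }
> {code}
>
> With granularity "day", Druid evaluates this limitSpec within each day bucket, so the query can return far more than 10 rows overall and the per-bucket top 10 need not match the global top 10.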
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)