Posted to issues@hive.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/08/16 16:36:00 UTC

[jira] [Work logged] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

     [ https://issues.apache.org/jira/browse/HIVE-20683?focusedWorklogId=296403&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296403 ]

ASF GitHub Bot logged work on HIVE-20683:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Aug/19 16:35
            Start Date: 16/Aug/19 16:35
    Worklog Time Spent: 10m 
      Work Description: b-slim commented on pull request #723: [HIVE-20683] Add the Ability to push Dynamic Between and Bloom filters to Druid
URL: https://github.com/apache/hive/pull/723#discussion_r314798310
 
 

 ##########
 File path: druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java
 ##########
 @@ -894,4 +944,257 @@ public static IndexSpec getIndexSpec(Configuration jc) {
     ImmutableList<AggregatorFactory> aggregatorFactories = aggregatorFactoryBuilder.build();
     return Pair.of(dimensions, aggregatorFactories.toArray(new AggregatorFactory[0]));
   }
+
+  // Druid only supports String,Long,Float,Double selectors
+  private static Set<TypeInfo> druidSupportedTypeInfos = ImmutableSet.<TypeInfo>of(
+      TypeInfoFactory.stringTypeInfo, TypeInfoFactory.charTypeInfo,
+      TypeInfoFactory.varcharTypeInfo, TypeInfoFactory.byteTypeInfo,
+      TypeInfoFactory.intTypeInfo, TypeInfoFactory.longTypeInfo,
+      TypeInfoFactory.shortTypeInfo, TypeInfoFactory.doubleTypeInfo
+  );
+
+  private static Set<TypeInfo> stringTypeInfos = ImmutableSet.<TypeInfo>of(
+      TypeInfoFactory.stringTypeInfo,
+      TypeInfoFactory.charTypeInfo, TypeInfoFactory.varcharTypeInfo
+  );
+
+
+  public static org.apache.druid.query.Query addDynamicFilters(org.apache.druid.query.Query query,
+      ExprNodeGenericFuncDesc filterExpr, Configuration conf, boolean resolveDynamicValues
+  ) {
+    List<VirtualColumn> virtualColumns = Arrays
+        .asList(getVirtualColumns(query).getVirtualColumns());
+    org.apache.druid.query.Query rv = query;
+    DimFilter joinReductionFilter = toDruidFilter(filterExpr, conf, virtualColumns,
+        resolveDynamicValues
+    );
+    if(joinReductionFilter != null) {
+      String type = query.getType();
+      DimFilter filter = new AndDimFilter(joinReductionFilter, query.getFilter());
+      switch (type) {
+      case org.apache.druid.query.Query.TIMESERIES:
+        rv = Druids.TimeseriesQueryBuilder.copy((TimeseriesQuery) query)
+            .filters(filter)
+            .virtualColumns(VirtualColumns.create(virtualColumns))
+            .build();
+        break;
+      case org.apache.druid.query.Query.TOPN:
+        rv = new TopNQueryBuilder((TopNQuery) query)
+            .filters(filter)
+            .virtualColumns(VirtualColumns.create(virtualColumns))
+            .build();
+        break;
+      case org.apache.druid.query.Query.GROUP_BY:
+        rv = new GroupByQuery.Builder((GroupByQuery) query)
+            .setDimFilter(filter)
+            .setVirtualColumns(VirtualColumns.create(virtualColumns))
+            .build();
+        break;
+      case org.apache.druid.query.Query.SCAN:
+        rv = ScanQuery.ScanQueryBuilder.copy((ScanQuery) query)
+            .filters(filter)
+            .virtualColumns(VirtualColumns.create(virtualColumns))
+            .build();
+        break;
+      case org.apache.druid.query.Query.SELECT:
+        rv = Druids.SelectQueryBuilder.copy((SelectQuery) query)
+            .filters(filter)
+            .virtualColumns(VirtualColumns.create(virtualColumns))
+            .build();
+        break;
+      default:
+        throw new UnsupportedOperationException("Unsupported Query type " + type);
+      }
+    }
+    return rv;
+  }
+
+  private static DimFilter toDruidFilter(ExprNodeDesc filterExpr, Configuration configuration,
 
 Review comment:
   please add nullable annotations
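
A hedged illustration of the reviewer's request. Hive code commonly uses javax.annotation.Nullable (JSR-305); the annotation below is declared locally only so this sketch compiles on its own, and the method is a simplified stand-in for the truncated toDruidFilter signature in the diff, not Hive's actual implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullableSketch {

  // Stand-in for javax.annotation.Nullable, declared locally so the
  // sketch is self-contained.
  @Retention(RetentionPolicy.RUNTIME)
  @Target({ElementType.METHOD, ElementType.PARAMETER})
  @interface Nullable {}

  // The annotations document that the input expression may be absent and
  // that the translation may fail and return null, so callers must
  // null-check -- as addDynamicFilters in the diff does with
  // `if(joinReductionFilter != null)`.
  @Nullable
  static String toFilterString(@Nullable String filterExpr) {
    if (filterExpr == null) {
      return null;
    }
    return "filter(" + filterExpr + ")";
  }

  public static void main(String[] args) {
    System.out.println(toFilterString("x > 1")); // prints filter(x > 1)
    System.out.println(toFilterString(null));    // prints null
  }
}
```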
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 296403)
    Time Spent: 20m  (was: 10m)

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> ------------------------------------------------------------------
>
>                 Key: HIVE-20683
>                 URL: https://issues.apache.org/jira/browse/HIVE-20683
>             Project: Hive
>          Issue Type: New Feature
>          Components: Druid integration
>            Reporter: Nishant Bangarwa
>            Assignee: Nishant Bangarwa
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, HIVE-20683.6.patch, HIVE-20683.8.patch, HIVE-20683.patch
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter from the min-max values and a BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (added via https://github.com/apache/incubator-druid/pull/6222).
> Implementation details - 
> # Hive generates the filters and passes them as part of 'filterExpr' in the TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During the execution phase, before sending the query to Druid, DruidQueryBasedRecordReader will deserialize this filter, translate it into a Druid DimFilter, and add it to the existing DruidQuery. The Tez executor already ensures that all dynamic values are initialized before we start reading results from the record reader. 
> # Explaining a Druid query also prints the query sent to Druid as {{druid.json.query}}, so we need to make sure that query is updated with the filters as well. During explain we do not have the actual dynamic values, so instead of values we will print the dynamic expression itself as part of the Druid query. 
> Note: this work requires Druid to be updated to version 0.13.0.
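
The rewrite step described above (and implemented by addDynamicFilters in the diff) can be sketched in miniature: AND-combine the dynamic semi-join filter with the query's existing filter, then rebuild the query for its concrete type. All class names here (SimpleFilter, SimpleQuery) are placeholders for self-containment, not Druid's real API; the actual patch uses DimFilter, AndDimFilter, and the per-type query builders shown in the review diff:

```java
public class FilterRewriteSketch {

  static final class SimpleFilter {
    final String expr;
    SimpleFilter(String expr) { this.expr = expr; }

    // AND-combine two filters, tolerating a missing existing filter --
    // analogous to new AndDimFilter(joinReductionFilter, query.getFilter()).
    static SimpleFilter and(SimpleFilter dynamic, SimpleFilter existing) {
      if (existing == null) {
        return dynamic;
      }
      return new SimpleFilter("(" + dynamic.expr + " AND " + existing.expr + ")");
    }
  }

  static final class SimpleQuery {
    final String type;         // e.g. "timeseries", "scan"
    final SimpleFilter filter; // may be null
    SimpleQuery(String type, SimpleFilter filter) {
      this.type = type;
      this.filter = filter;
    }
  }

  // Analogue of addDynamicFilters: return the original query unchanged when
  // there is no dynamic filter, otherwise a copy carrying the combined filter.
  static SimpleQuery addDynamicFilter(SimpleQuery query, SimpleFilter dynamic) {
    if (dynamic == null) {
      return query;
    }
    SimpleFilter combined = SimpleFilter.and(dynamic, query.filter);
    switch (query.type) {
      case "timeseries":
      case "topN":
      case "groupBy":
      case "scan":
      case "select":
        return new SimpleQuery(query.type, combined);
      default:
        // Mirrors the default branch of the switch in the diff.
        throw new UnsupportedOperationException("Unsupported Query type " + query.type);
    }
  }

  public static void main(String[] args) {
    SimpleQuery q = new SimpleQuery("scan", new SimpleFilter("country = 'US'"));
    SimpleQuery rewritten = addDynamicFilter(q, new SimpleFilter("id BETWEEN 1 AND 9"));
    // prints (id BETWEEN 1 AND 9 AND country = 'US')
    System.out.println(rewritten.filter.expr);
  }
}
```

The null-check on the dynamic filter matches the patch's behavior of leaving the query untouched when no join-reduction filter could be derived.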



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)