Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/08/05 07:47:00 UTC

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

     [ https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=466643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-466643 ]

ASF GitHub Bot logged work on HIVE-23880:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Aug/20 07:46
            Start Date: 05/Aug/20 07:46
    Worklog Time Spent: 10m 
      Work Description: abstractdog commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-669038358


   @zabetak : let me grab the opportunity to thank you for your [JMH benchmarks](https://issues.apache.org/jira/browse/HIVE-23880?focusedCommentId=17163111&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17163111)! They helped a lot. Some of my findings from the last two weeks:
   
   1. On the cluster, JDK11 is better in every scenario; we'll have to switch to it in the LLAP daemons.
   
   2. More threads don't bring any serious improvement; that's the most important thing I've found in the last two weeks. Basically, my earlier implementation was wrong: the results were distorted by improper usage of the executor service (which is what's fixed in the new, squashed commit). Now, on the cluster, I see results that are in line with your JMH findings.
   
   3. Removed the automatic thread calculation: performance tests revealed that 1 thread is optimal and can lead to serious improvements. This is something that cannot be measured easily with JMH, because the advantage of 1 thread (which means the main task thread + 1 worker thread) is that the merge is decoupled from the main thread, letting it handle other, probably CPU-heavy work (waiting for inputs one by one, building vectorized row batches one by one, etc.). By this I reduced the task runtime by 50-60 seconds (170s -> 110s); see the sketch after this list.
   
   4. As agreed with @ashutoshc, I've left support for multiple threads in the code, because we don't know whether we can take advantage of it later, and the split logic doesn't consume a significant amount of resources. However, I've set the default to 1 thread in HiveConf to let users know that this is the recommended, optimal usage.
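
   To illustrate point 3: below is a minimal sketch of the decoupling idea (illustrative only, not the actual VectorUDAFBloomFilterMerge code; the class and method names are made up). The main task thread keeps reading inputs and building row batches, while a single worker thread ORs each incoming filter into the accumulated one.

{code}
// Minimal sketch, NOT the actual VectorUDAFBloomFilterMerge code.
// A single-worker executor moves the merge work off the main task
// thread, so the main thread can keep consuming input batches.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BloomMergeSketch {
  private final ExecutorService merger = Executors.newSingleThreadExecutor();
  private final long[] accumulated;                 // merged bloom filter bits
  private final List<Future<?>> pending = new ArrayList<>();

  BloomMergeSketch(int numLongs) {
    accumulated = new long[numLongs];
  }

  // Called from the main task thread for each incoming filter (assumed to
  // have the same size as the accumulator); returns immediately so the
  // main thread can go back to reading input.
  void submit(long[] incoming) {
    pending.add(merger.submit(() -> {
      for (int i = 0; i < accumulated.length; i++) {
        accumulated[i] |= incoming[i];              // bitwise OR merge
      }
    }));
  }

  // Called once at the end: waiting on the futures both surfaces merge
  // errors and makes the merged bits visible to the main thread.
  long[] finish() throws Exception {
    for (Future<?> f : pending) {
      f.get();
    }
    merger.shutdown();
    return accumulated;
  }
}
{code}

   Because the executor has exactly one worker, the OR loops run one after another and no locking on the accumulated array is needed; this matches the "main task thread + 1 thread" setup described above.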
   
   cc: @pgaref , @ashutoshc 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 466643)
    Time Spent: 3h 10m  (was: 3h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-23880
>                 URL: https://issues.apache.org/jira/browse/HIVE-23880
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: László Bodor
>            Assignee: László Bodor
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: lipwig-output3605036885489193068.svg
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in case of a large number of source mapper tasks (~1000, Map 1 in the example below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 70000000)*/ ss_customer_sk
>             ,sum(act_sales) sumsales
>       from (select ss_item_sk
>                   ,ss_ticket_number
>                   ,ss_customer_sk
>                   ,case when sr_return_quantity is not null then (ss_quantity-sr_return_quantity)*ss_sales_price
>                                                             else (ss_quantity*ss_sales_price) end act_sales
>             from store_sales left outer join store_returns on (sr_item_sk = ss_item_sk
>                                                                and sr_ticket_number = ss_ticket_number)
>                 ,reason
>             where sr_reason_sk = r_reason_sk
>               and r_reason_desc = 'reason 66') t
>       group by ss_customer_sk
>       order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  [^lipwig-output3605036885489193068.svg] 
> {code}
> ----------------------------------------------------------------------------------------------
>         VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ----------------------------------------------------------------------------------------------
> Map 3 ..........      llap     SUCCEEDED      1          1        0        0       0       0
> Map 1 ..........      llap     SUCCEEDED   1263       1263        0        0       0       0
> Reducer 2             llap       RUNNING      1          0        1        0       0       0
> Map 4                 llap       RUNNING   6154          0      207     5947       0       0
> Reducer 5             llap        INITED     43          0        0       43       0       0
> Reducer 6             llap        INITED      1          0        0        1       0       0
> ----------------------------------------------------------------------------------------------
> VERTICES: 02/06  [====>>----------------------] 16%   ELAPSED TIME: 149.98 s
> ----------------------------------------------------------------------------------------------
> {code}
> For example, 70M expected entries lead to a bloom filter of 436,465,696 bits, so merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR operations; this is a very hot codepath, but it can be parallelized, as sketched below.
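> To make the hot loop concrete, here is an illustrative sketch (the method name is made up; this is not Hive's actual BloomKFilter API): a merge is a word-by-word OR over the filter's backing long array, and because the words are independent, the array can be split into disjoint ranges that different threads merge without synchronization.
> {code}
> // Illustrative sketch, not Hive's actual BloomKFilter code: a merge is a
> // word-by-word OR over the backing long[]. Disjoint ranges [start, end)
> // are independent, so separate threads can OR them in parallel.
> static void mergeRange(long[] target, long[] source, int start, int end) {
>   for (int i = start; i < end; i++) {
>     target[i] |= source[i];
>   }
> }
> {code}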


