Posted to issues@hive.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2017/02/07 01:36:41 UTC

[jira] [Commented] (HIVE-15682) Eliminate the dummy iterator and optimize the per row based reducer-side processing

    [ https://issues.apache.org/jira/browse/HIVE-15682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855125#comment-15855125 ] 

Xuefu Zhang commented on HIVE-15682:
------------------------------------

I took another look at the code block, and the per-row dummy iterator doesn't appear to carry much of a performance penalty. Nevertheless, we can create a single dummy iterator and reuse it to avoid per-row object creation (and GC). Other than this, the whole code path doesn't seem to have much room for optimization.
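The reuse idea above can be sketched as a single-element iterator that is reset for each new row instead of being reallocated. This is a minimal illustrative sketch, not the actual Hive code; the class and method names are hypothetical.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

/**
 * Hypothetical sketch of a reusable single-element iterator: the same
 * instance is reset with each new row's value, so no iterator object
 * is allocated per row and GC pressure is reduced.
 */
final class ReusableSingletonIterator<T> implements Iterator<T> {
  private T value;
  private boolean consumed = true;

  /** Point the iterator at a new value, making it iterable again. */
  void reset(T newValue) {
    this.value = newValue;
    this.consumed = false;
  }

  @Override
  public boolean hasNext() {
    return !consumed;
  }

  @Override
  public T next() {
    if (consumed) {
      throw new NoSuchElementException();
    }
    consumed = true;
    T v = value;
    value = null; // drop the reference so the row can be collected
    return v;
  }
}
```

The caller would invoke {{reset(row)}} once per input row and pass the same iterator instance down the existing (key, value-iterator) code path.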

> Eliminate the dummy iterator and optimize the per row based reducer-side processing
> -----------------------------------------------------------------------------------
>
>                 Key: HIVE-15682
>                 URL: https://issues.apache.org/jira/browse/HIVE-15682
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>    Affects Versions: 2.2.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>
> HIVE-15580 introduced a dummy iterator per input row, which can be eliminated. This is because {{SparkReduceRecordHandler}} is able to handle single key-value pairs. We can refactor this part of the code 1. to remove the need for an iterator and 2. to optimize the code path for per-(key, value) based (instead of (key, value iterator)) processing. It would also be great if we could measure the performance after the optimizations and compare it to the performance prior to HIVE-15580.
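The refactoring described above amounts to adding a direct per-(key, value) entry point alongside the existing (key, value-iterator) one. The sketch below is illustrative only; the class and method names are hypothetical and do not reflect the actual {{SparkReduceRecordHandler}} API.

```java
import java.util.Iterator;

/**
 * Hypothetical sketch of the two reducer-side entry points: the
 * existing path takes a key plus an iterator over grouped values,
 * while the optimized path handles a single (key, value) pair
 * directly, so callers with one value per row need no dummy iterator.
 */
class ReduceHandlerSketch<K, V> {
  int rowsProcessed = 0;

  /** Existing path: a key with an iterator over its grouped values. */
  void processRow(K key, Iterator<V> values) {
    while (values.hasNext()) {
      processRow(key, values.next());
    }
  }

  /** Optimized path: handle one (key, value) pair with no iterator. */
  void processRow(K key, V value) {
    rowsProcessed++; // stand-in for forwarding the row to the operator tree
  }
}
```

Benchmarking both paths against a pre-HIVE-15580 build, as the description suggests, would confirm whether the iterator elimination is measurable in practice.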



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)