Posted to issues@impala.apache.org by "Tim Armstrong (JIRA)" <ji...@apache.org> on 2017/05/25 15:49:04 UTC

[jira] [Resolved] (IMPALA-5347) Parquet scanner has a lot of small CPU inefficiencies

     [ https://issues.apache.org/jira/browse/IMPALA-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong resolved IMPALA-5347.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.9.0


IMPALA-5347: Parquet scanner microoptimizations

A mix of microoptimizations revealed by profiling the parquet scanner.
All led to some measurable improvement and added up to significant
speedups for various scans.

* Add ALWAYS_INLINE to hot functions that GCC was mistakenly not inlining
  in all cases.
* Apply __restrict__ in a few places so the compiler knows that it is
  safe to cache values accessed via those pointers.
* memset() the whole batch instead of just the null indicators in cases
  where it is almost certainly cheaper (all three techniques are sketched
  below).
  git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/50/6950/19
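
A minimal sketch of what these three changes can look like in a C++ scanner
hot path. The macro definition, the Tuple layout, and the function names
(DecodeValue, MaterializeBatch) are illustrative stand-ins, not the actual
Impala code:

    #include <cstdint>
    #include <cstring>

    // Illustrative stand-in for an ALWAYS_INLINE macro: force inlining of
    // tiny, hot functions that the compiler might otherwise leave as
    // out-of-line calls.
    #define ALWAYS_INLINE inline __attribute__((always_inline))

    struct Tuple {
      uint8_t null_indicator;  // hypothetical one-byte null indicator per tuple
      int64_t val;
    };

    // The per-value decode step is tiny, so call overhead dominates if the
    // compiler declines to inline it.
    static ALWAYS_INLINE int64_t DecodeValue(const uint8_t* __restrict__ data,
                                             int idx) {
      int64_t v;
      memcpy(&v, data + idx * sizeof(int64_t), sizeof(v));
      return v;
    }

    // __restrict__ promises the compiler that 'tuples' and 'data' never alias,
    // so values loaded through them can be kept in registers across iterations
    // instead of being reloaded every time.
    static void MaterializeBatch(Tuple* __restrict__ tuples,
                                 const uint8_t* __restrict__ data,
                                 int num_tuples) {
      // memset() the whole batch up front rather than clearing each tuple's
      // one-byte null indicator individually: one large memset is usually far
      // cheaper than many one- or two-byte ones.
      memset(tuples, 0, num_tuples * sizeof(Tuple));
      for (int i = 0; i < num_tuples; ++i) {
        tuples[i].val = DecodeValue(data, i);
      }
    }

Whether the whole-batch memset actually wins depends on how wide the tuples
are relative to the null bytes, which is presumably why the commit message
qualifies it with "cases where it is almost certainly cheaper".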
--
To view, visit http://gerrit.cloudera.org:8080/6950
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings


> Parquet scanner has a lot of small CPU inefficiencies
> -----------------------------------------------------
>
>                 Key: IMPALA-5347
>                 URL: https://issues.apache.org/jira/browse/IMPALA-5347
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Backend
>    Affects Versions: Impala 2.9.0
>            Reporter: Tim Armstrong
>            Assignee: Tim Armstrong
>            Priority: Minor
>              Labels: parquet, performance
>             Fix For: Impala 2.9.0
>
>
> I spent some time looking at the parquet scanner in perf top. There are a lot of cases where the code is inefficient in ways that are easily fixed. Together, these could add up to a significant perf win for scans.
> The assembly of the core MaterializeValueBatch() loop has a lot of obvious inefficiency:
> * Many loads from memory of values that are constant within the loop
> * The generated bit unpacking and dictionary decoding code has a lot of inefficiency, e.g. a complicated bounds check (this and the previous point are sketched below)
> * Hot functions like DictDecoder::Get() are not inlined.
> In some scans, a lot of time is also spent calling memset() on just one or two bytes inside InitTuple().
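
To make the first two bullets concrete, here is a rough sketch of the kind
of rewrite involved. The class body and method names below are illustrative,
not Impala's actual DictDecoder interface, and the batched bounds check is
just one possible way to simplify the per-value check:

    #include <algorithm>
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // Hypothetical dictionary decoder used only to illustrate the two loop
    // shapes; it is not Impala's real DictDecoder API.
    class DictDecoder {
     public:
      explicit DictDecoder(std::vector<int64_t> dict) : dict_(std::move(dict)) {}

      // Per-value bounds check in the hot loop: the size reload and the
      // compare-and-branch are paid for every decoded value.
      void DecodePerValueCheck(const uint32_t* codes, int64_t* out, int n) {
        for (int i = 0; i < n; ++i) {
          if (codes[i] >= dict_.size()) throw std::runtime_error("bad dict code");
          out[i] = dict_[codes[i]];
        }
      }

      // Cheaper shape: hoist the loop-invariant dictionary pointer and size
      // into locals, validate the batch once, and keep the inner loop a plain
      // load-and-store.
      void DecodeBatchCheck(const uint32_t* codes, int64_t* out, int n) {
        const uint32_t dict_size = static_cast<uint32_t>(dict_.size());
        const int64_t* dict = dict_.data();
        uint32_t max_code = 0;
        for (int i = 0; i < n; ++i) max_code = std::max(max_code, codes[i]);
        if (n > 0 && max_code >= dict_size) throw std::runtime_error("bad dict code");
        for (int i = 0; i < n; ++i) out[i] = dict[codes[i]];
      }

     private:
      std::vector<int64_t> dict_;
    };

The same hoisting pattern applies to any member or length that stays constant
for the duration of a batch.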



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)