Posted to issues@kudu.apache.org by "Grant Henke (Jira)" <ji...@apache.org> on 2020/02/20 22:47:00 UTC

[jira] [Resolved] (KUDU-3056) kudu-spark HdrHistogramAccumulator is too big and makes Spark jobs fail

     [ https://issues.apache.org/jira/browse/KUDU-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Grant Henke resolved KUDU-3056.
-------------------------------
    Fix Version/s:     (was: NA)
                   1.12.0
       Resolution: Fixed

> kudu-spark HdrHistogramAccumulator is too big and makes Spark jobs fail
> ------------------------------------------------------------------------
>
>                 Key: KUDU-3056
>                 URL: https://issues.apache.org/jira/browse/KUDU-3056
>             Project: Kudu
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 1.9.0
>            Reporter: caiconghui
>            Assignee: Grant Henke
>            Priority: Major
>             Fix For: 1.12.0
>
>         Attachments: heap1.png, heap2.png, heap3.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> In our production environment we use kudu-spark to read Kudu tables. Even though we
> never use the HdrHistogramAccumulator ourselves, the accumulators stored in an array
> are still so big that they total almost 2 MB. Because they are serialized back to the
> driver with every task result (roughly 1413 tasks x ~2 MB is close to 3 GB), once the
> number of kudu-spark tasks (reading Kudu data and shuffling) exceeds about 900, the
> Spark job fails with the following error:
>  
> Job aborted due to stage failure: Total size of serialized results of 1413 tasks (3.0 GB) is bigger than spark.driver.maxResultSize (3.0 GB)
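>  
> A possible mitigation until a fixed release is deployed is to raise
> spark.driver.maxResultSize (a standard Spark setting; it can also be set to "0"
> for unlimited). A minimal sketch in Scala, where the app name and the "4g" value
> are only illustrative:
>  
>     import org.apache.spark.sql.SparkSession
>  
>     // Raise the cap on the total size of serialized task results. This is a
>     // stopgap, not a fix for the oversized accumulator itself (fixed in 1.12.0).
>     val spark = SparkSession.builder()
>       .appName("kudu-spark-read")                  // hypothetical app name
>       .config("spark.driver.maxResultSize", "4g")  // above the 3.0 GB that failed
>       .getOrCreate()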



--
This message was sent by Atlassian Jira
(v8.3.4#803005)