Posted to issues@kudu.apache.org by "Grant Henke (Jira)" <ji...@apache.org> on 2020/02/18 13:40:00 UTC

[jira] [Commented] (KUDU-3056) kudu-spark HdrHistogramAccumulator is too big and makes Spark jobs fail

    [ https://issues.apache.org/jira/browse/KUDU-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039087#comment-17039087 ] 

Grant Henke commented on KUDU-3056:
-----------------------------------

I have not seen this reported before. It could be useful to disable the histogram accumulators if this is an ongoing issue. There may also be an opportunity to make them smaller.

As a workaround, could you increase spark.driver.maxResultSize?
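
A minimal sketch of that workaround, assuming a Scala Spark job (the application name and the 8g value are illustrative, not from this ticket):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("kudu-read")
      // Raise the cap on the total size of serialized task results the
      // driver will accept; it needs to exceed
      // (task count) x (per-task result size).
      .config("spark.driver.maxResultSize", "8g")
      .getOrCreate()

The same setting can be passed on the command line with --conf spark.driver.maxResultSize=8g, and setting it to 0 removes the limit entirely (at the risk of running the driver out of memory).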

> kudu-spark HdrHistogramAccumulator is too big and makes Spark jobs fail
> ------------------------------------------------------------------------
>
>                 Key: KUDU-3056
>                 URL: https://issues.apache.org/jira/browse/KUDU-3056
>             Project: Kudu
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 1.9.0
>            Reporter: caiconghui
>            Priority: Major
>             Fix For: NA
>
>         Attachments: heap1.png, heap2.png, heap3.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> In our production environment, we use kudu-spark to read a Kudu table. Even though we don't use the
> HdrHistogramAccumulator, the HdrHistogramAccumulators stored in an array are still so big that
> in total they are almost 2 MB. As a result, when the number of kudu-spark tasks (for reading Kudu data and shuffling) exceeds 900, the Spark job fails with the following error:
>  
> Job aborted due to stage failure: Total size of serialized results of 1413 tasks (3.0 GB) is bigger than spark.driver.maxResultSize (3.0 GB)
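
For scale, a rough back-of-the-envelope check of the quoted numbers (assuming the per-task result size is dominated by the accumulator state):

    3.0 GB / 1413 tasks ≈ 2.2 MB of serialized results per task
    2.2 MB/task x ~1400 tasks ≈ 3.0 GB, the spark.driver.maxResultSize limit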


