Posted to issues@kudu.apache.org by "caiconghui (Jira)" <ji...@apache.org> on 2020/02/17 14:13:00 UTC
[jira] [Created] (KUDU-3056) kudu-spark HdrHistogramAccumulator is
too big, causing Spark job failure
caiconghui created KUDU-3056:
--------------------------------
Summary: kudu-spark HdrHistogramAccumulator is too big, causing Spark job failure
Key: KUDU-3056
URL: https://issues.apache.org/jira/browse/KUDU-3056
Project: Kudu
Issue Type: Bug
Components: spark
Affects Versions: 1.9.0
Reporter: caiconghui
Fix For: NA
Attachments: heap1.png, heap2.png, heap3.png
In our production environment, we use kudu-spark to read a Kudu table. Even though we do not
use the HdrHistogramAccumulator, the HdrHistogramAccumulator instances stored in an array are
still very large, totaling almost 2 MB. As a result, when the number of kudu-spark tasks (for reading Kudu data and shuffling) exceeds 900, the Spark job fails with the following error:
Job aborted due to stage failure: Total size of serialized results of 1413 tasks (3.0 GB) is bigger than spark.driver.maxResultSize (3.0 GB)
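A quick back-of-envelope check (using only the numbers quoted above: ~2 MB of serialized accumulator state per task, 1413 tasks, and a 3.0 GB spark.driver.maxResultSize) shows why the accumulator alone nearly exhausts the driver's result-size budget:

```python
# Sketch based on the figures in this report; the ~2 MB per-task
# accumulator size is the reporter's estimate, not a measured constant.
MB = 1024 * 1024
GB = 1024 * MB

tasks = 1413                  # tasks in the failed stage
accumulator_bytes = 2 * MB    # serialized HdrHistogramAccumulator per task
max_result_size = 3 * GB      # spark.driver.maxResultSize

accumulator_total = tasks * accumulator_bytes
print(f"accumulator overhead: {accumulator_total / GB:.2f} GB")   # ~2.76 GB
print(f"share of limit: {accumulator_total / max_result_size:.0%}")  # ~92%
```

So roughly 92% of the 3.0 GB limit is consumed by accumulator payloads that the job never reads, leaving almost no headroom for the actual task results. A common workaround (not part of this report) is to raise `spark.driver.maxResultSize` via `--conf`, but that only postpones the failure as the task count grows; shrinking or dropping the unused accumulator addresses the cause.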
--
This message was sent by Atlassian Jira
(v8.3.4#803005)