Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2018/01/24 13:02:00 UTC
[jira] [Resolved] (SPARK-22784) Configure reading buffer size in Spark History Server
[ https://issues.apache.org/jira/browse/SPARK-22784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-22784.
-------------------------------
Resolution: Won't Fix
> Configure reading buffer size in Spark History Server
> -----------------------------------------------------
>
> Key: SPARK-22784
> URL: https://issues.apache.org/jira/browse/SPARK-22784
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.2.1
> Reporter: Mikhail Erofeev
> Priority: Minor
> Attachments: replay-baseline.svg
>
>
> Motivation:
> Our Spark History Server spends most of its backfill time inside BufferedReader and StringBuffer. This happens because the average line size of our events is ~1,500,000 characters (due to a large number of partitions and iterations), whereas the default buffer size is 2048 bytes. See the attached flame graph.
> Implementation:
> I've added logging of the time spent and the line size for each job,
> parametrised ReplayListenerBus with a new buffer-size parameter, and
> measured the best buffer size: 20x the average line size (30 MB) gives a 32% speedup in a local test.
> Result:
> Backfill of the Spark History Server and reading into the cache will be up to 30% faster after tuning.
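The tuning described above amounts to passing an explicit buffer size to the reader that replays event-log lines. A minimal, self-contained sketch in plain Java (not Spark's actual ReplayListenerBus code; the class name, method name, and sizes here are illustrative assumptions):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferReplayDemo {
    // Read an event-log-like file line by line, as a replay listener would,
    // using a configurable BufferedReader buffer size instead of the default.
    static long countChars(Path file, int bufferSizeBytes) throws IOException {
        long chars = 0;
        try (BufferedReader reader =
                 new BufferedReader(new FileReader(file.toFile()), bufferSizeBytes)) {
            String line;
            while ((line = reader.readLine()) != null) {
                chars += line.length();
            }
        }
        return chars;
    }

    public static void main(String[] args) throws IOException {
        // Synthetic "event log": a few very long lines, mimicking events with
        // many partitions. Sizes are illustrative, not measured in the ticket.
        Path file = Files.createTempFile("events", ".log");
        String longLine = "x".repeat(1_500_000);
        try (BufferedWriter w = Files.newBufferedWriter(file)) {
            for (int i = 0; i < 4; i++) {
                w.write(longLine);
                w.newLine();
            }
        }
        // A small buffer vs. a buffer sized as a multiple of the line length;
        // both read the same content, but the large buffer avoids repeated
        // refills and StringBuffer growth on long lines.
        long small = countChars(file, 8 * 1024);
        long large = countChars(file, 30 * 1024 * 1024);
        System.out.println(small == large); // prints "true"
        Files.delete(file);
    }
}
```

The speedup comes from fewer underlying read calls and fewer intermediate-buffer reallocations per long line, which is what the attached flame graph attributes the time to.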
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)