Posted to issues@spark.apache.org by "Lantao Jin (Jira)" <ji...@apache.org> on 2020/09/25 06:17:00 UTC

[jira] [Updated] (SPARK-32994) Heavy external accumulators may lead driver full GC problem

     [ https://issues.apache.org/jira/browse/SPARK-32994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lantao Jin updated SPARK-32994:
-------------------------------
    Summary: Heavy external accumulators may lead driver full GC problem  (was: External accumulators (not start with InternalAccumulator.METRICS_PREFIX) may lead driver full GC problem)

> Heavy external accumulators may lead driver full GC problem
> -----------------------------------------------------------
>
>                 Key: SPARK-32994
>                 URL: https://issues.apache.org/jira/browse/SPARK-32994
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.4.7, 3.0.1, 3.1.0
>            Reporter: Lantao Jin
>            Priority: Major
>         Attachments: Screen Shot 2020-09-24 at 5.19.26 PM.png, Screen Shot 2020-09-24 at 5.19.58 PM.png, Screen Shot 2020-09-25 at 11.32.51 AM.png, Screen Shot 2020-09-25 at 11.35.01 AM.png, Screen Shot 2020-09-25 at 11.36.48 AM.png
>
>
> We use Spark + Delta Lake. Recently we found our Spark driver facing a very heavy full GC problem when users submitted a MERGE INTO query. The driver held over 100GB of memory (depending on the configured max heap size), and full GCs could never reclaim it. By taking a heap dump we found the root cause.
>  !Screen Shot 2020-09-25 at 11.32.51 AM.png|width=70%! 
>  !Screen Shot 2020-09-25 at 11.35.01 AM.png|width=100%! 
>  !Screen Shot 2020-09-25 at 11.36.48 AM.png|width=100%! 
> From the above heap dump, we can see that Delta uses a `SetAccumulator` to record touched file names:
> {code}
>     // Accumulator to collect all the distinct touched files
>     val touchedFilesAccum = new SetAccumulator[String]()
>     spark.sparkContext.register(touchedFilesAccum, TOUCHED_FILES_ACCUM_NAME)
>     // UDF to record touched file names and add them to the accumulator
>     val recordTouchedFileName = udf { (fileName: String) => {
>       touchedFilesAccum.add(fileName)
>       1
>     }}.asNondeterministic()
> {code}
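> For reference, `SetAccumulator` here is a Delta Lake helper built on Spark's `AccumulatorV2` API. A minimal sketch of such a set-collecting accumulator (an illustration under that assumption, not Delta's exact source) could look like this:
> {code}
> import java.util.Collections
> import org.apache.spark.util.AccumulatorV2
>
> // Illustrative sketch: a set-collecting accumulator like Delta's SetAccumulator.
> // Each task gets its own copy; on task completion the per-task set travels back
> // to the driver inside the task's accumulator updates.
> class SetAccumulator[T] extends AccumulatorV2[T, java.util.Set[T]] {
>   private val _set = Collections.synchronizedSet(new java.util.HashSet[T]())
>
>   override def isZero: Boolean = _set.isEmpty
>   override def copy(): SetAccumulator[T] = {
>     val newAcc = new SetAccumulator[T]()
>     newAcc._set.addAll(_set)
>     newAcc
>   }
>   override def reset(): Unit = _set.clear()
>   override def add(v: T): Unit = _set.add(v)
>   override def merge(other: AccumulatorV2[T, java.util.Set[T]]): Unit =
>     _set.addAll(other.value)
>   override def value: java.util.Set[T] = _set
> }
> {code}
> Note that `value` returns the whole set, so every task result carries all the file names that task touched.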
> In a big query, each task may hold thousands of file names, and if a stage contains tens of thousands of tasks, the DAGScheduler may hold millions of `CompletionEvent`s, each of which holds those thousands of file names in its `accumUpdates`. All of these accumulator objects are delivered to the event loop via Spark listener events, and even a full GC cannot release the memory.
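> To get a feel for the scale, here is a back-of-envelope estimate; the figures are illustrative assumptions, not measurements from our heap dump:
> {code}
> // Hypothetical figures for illustration only:
> val tasks        = 50000L // tasks in one large stage
> val filesPerTask = 2000L  // file names held by each task's accumulator update
> val bytesPerName = 200L   // rough retained size of one file-name String
>
> // ~20 GB retained by pending CompletionEvents before the event loop drains
> val retainedBytes = tasks * filesPerTask * bytesPerName // 20,000,000,000
> {code}
> At this scale the pending events alone retain tens of GB, consistent with the driver heap pressure described above.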
> A PR will be submitted. With the patch applied, the memory problem is gone.
> Before the patch: A full GC doesn't help.
>  !Screen Shot 2020-09-24 at 5.19.58 PM.png|width=70%! 
> After the patch: no full GC occurs and memory does not ramp up.
>  !Screen Shot 2020-09-24 at 5.19.26 PM.png|width=70%! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org