Posted to issues@spark.apache.org by "shufan (Jira)" <ji...@apache.org> on 2023/02/09 07:44:00 UTC

[jira] [Commented] (SPARK-4073) Parquet+Snappy can cause significant off-heap memory usage

    [ https://issues.apache.org/jira/browse/SPARK-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17686239#comment-17686239 ] 

shufan commented on SPARK-4073:
-------------------------------

I had a similar problem.

When I submitted a Hive on Spark task that joined two tables, where one of the large tables was Parquet + Snappy, about 5 GB in size with 100 million rows of data, the executor was killed by k8s.

The configuration was:
    set spark.executor.memoryOverhead=6g;
    set spark.executor.memory=5g;
    set spark.executor.cores=4;
    set spark.executor.instances=2;
    set spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4096m -Dio.netty.maxDirectMemory=104857600;

With the above configuration, the JVM memory usage exceeds 11 GB (spark.executor.memory + spark.executor.memoryOverhead). The executor has less than 5 GB of heap memory and less than 4 GB of direct ByteBuffer memory, which together come to around 9 GB, so the remaining 11 - 9 = 2 GB is unaccounted for.

Can you tell me what the remaining 2 GB of memory is being used for? Is there any way to limit it?
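
Not part of the original comment, but as a sketch of how such unaccounted native memory is usually attributed: JVM Native Memory Tracking can be enabled on the executors and queried with jcmd. The settings below are illustrative assumptions layered on the configuration above; the MALLOC_ARENA_MAX line is an additional assumption about glibc arena growth, a common source of extra RSS in containers.

    -- Illustrative only: enable Native Memory Tracking so the executor's native
    -- allocations (threads, GC, code cache, direct buffers) can be broken down.
    set spark.executor.extraJavaOptions=-XX:NativeMemoryTracking=summary -XX:MaxDirectMemorySize=4096m -Dio.netty.maxDirectMemory=104857600;
    -- Then, inside the running executor pod: jcmd <executor-pid> VM.native_memory summary
    -- Assumption: glibc malloc arenas can also inflate RSS; capping them is a
    -- common mitigation, though NMT will not show memory malloc'd by native libraries.
    set spark.executorEnv.MALLOC_ARENA_MAX=2;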

> Parquet+Snappy can cause significant off-heap memory usage
> ----------------------------------------------------------
>
>                 Key: SPARK-4073
>                 URL: https://issues.apache.org/jira/browse/SPARK-4073
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Patrick Wendell
>            Priority: Critical
>
> The parquet snappy codec allocates off-heap buffers for decompression[1]. In one case the observed size of these buffers was high enough to add several GB of data to the overall virtual memory usage of the Spark executor process. I don't understand enough about our use of Snappy to fully grok how much data we would _expect_ to be present in these buffers at any given time, but I can say a few things.
> 1. The dataset had individual rows that were fairly large, e.g. megabytes.
> 2. Direct buffers are not cleaned up until GC events, and overall there was not much heap contention. So maybe they just weren't being cleaned.
> I opened PARQUET-118 to see if they can provide an option to use on-heap buffers for decompression. In the meantime, we could consider changing the default back to gzip, or we could do nothing (not sure how many other users will hit this).
> [1] https://github.com/apache/incubator-parquet-mr/blob/master/parquet-hadoop/src/main/java/parquet/hadoop/codec/SnappyDecompressor.java#L28
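
For illustration only (not part of the original issue), here is a minimal Java sketch of the direct-buffer behavior described in point 2: the native memory behind a direct ByteBuffer lives outside the heap and is released only when the buffer object is garbage collected, so a mostly idle heap can let that native memory linger.

    import java.nio.ByteBuffer;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            // Backed by native memory outside the Java heap: counts against
            // -XX:MaxDirectMemorySize, not against -Xmx.
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
            buf.putLong(0, 42L);

            // The native allocation is freed only after the small on-heap ByteBuffer
            // object becomes unreachable and a GC runs its cleaner. With little heap
            // pressure, GC may not happen for a long time, so the off-heap memory
            // stays allocated even though nothing is using it.
            buf = null;
            System.gc(); // only a hint; without a GC cycle the native memory is not reclaimed promptly
        }
    }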


