Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2017/09/28 02:20:00 UTC

[jira] [Commented] (SPARK-22149) spark.shuffle.memoryFraction (deprecated) in spark 2

    [ https://issues.apache.org/jira/browse/SPARK-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16183575#comment-16183575 ] 

Takeshi Yamamuro commented on SPARK-22149:
------------------------------------------

I think you should first ask on the Spark mailing list. Then, if it turns out something should be done, you can open a JIRA. Thanks!
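
For reference: in Spark 2 the old spark.shuffle.memoryFraction knob was folded into the unified memory manager, so the documented way to tune shuffle/execution memory is spark.memory.fraction and spark.memory.storageFraction (and, as a stopgap, spark.memory.useLegacyMode=true restores the 1.x-style settings). A minimal sketch of setting these in Scala, with illustrative values rather than recommendations:

    import org.apache.spark.sql.SparkSession

    // Illustrative values only; the app name is made up.
    val spark = SparkSession.builder()
      .appName("shuffle-memory-tuning")
      // Fraction of (JVM heap - 300MB) shared by execution and storage.
      // Default in Spark 2.1 is 0.6.
      .config("spark.memory.fraction", "0.6")
      // Share of that region reserved for cached storage blocks.
      // Default is 0.5; lowering it leaves more room for shuffle/execution.
      .config("spark.memory.storageFraction", "0.3")
      .getOrCreate()

Note that shuffle map outputs (the blockmgr-* files) are always written to local disk regardless of these settings; the memory fractions only affect how much data is buffered in memory before spilling during sorts and aggregations.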

> spark.shuffle.memoryFraction (deprecated) in spark 2
> ----------------------------------------------------
>
>                 Key: SPARK-22149
>                 URL: https://issues.apache.org/jira/browse/SPARK-22149
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation
>    Affects Versions: 2.1.1
>            Reporter: regis le bretonnic
>            Priority: Minor
>
> Hi
> This is not a bug, but maybe a lack of documentation.
> I have a job that produces a lot of blockmgr files... I do not understand why the shuffle writes so much to disk rather than using the NodeManager heap.
> I wanted to increase spark.shuffle.memoryFraction to reduce the share of data that goes to disk, but this parameter is deprecated in the version we use (https://spark.apache.org/docs/2.1.1/configuration.html).
> How do I increase the memory allocated to shuffle in Spark 2? Is there an undocumented parameter?
> I do not use an external shuffle service and I'd prefer to avoid it for now...
> Thanks in advance


