Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/08/18 06:01:00 UTC

[jira] [Resolved] (SPARK-21776) How to use memory-mapped files in Spark?

     [ https://issues.apache.org/jira/browse/SPARK-21776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-21776.
-------------------------------
    Resolution: Invalid

> How to use memory-mapped files in Spark?
> ----------------------------------------
>
>                 Key: SPARK-21776
>                 URL: https://issues.apache.org/jira/browse/SPARK-21776
>             Project: Spark
>          Issue Type: Improvement
>          Components: Block Manager, Documentation, Input/Output, Spark Core
>    Affects Versions: 2.1.1
>         Environment: Spark 2.1.1 
> Scala 2.11.8
>            Reporter: zhaP524
>            Priority: Trivial
>         Attachments: screenshot-1.png, screenshot-2.png
>
>
>       In production, we have to do a full load of an HBase table in Spark and join it against a dimension table to generate business data. Because the base table is loaded in full, memory pressure is very high, so I want to know whether Spark can handle this with memory-mapped files. Is there such a mechanism, and how is it used?
>       I also found a Spark parameter, spark.storage.memoryMapThreshold=2m, but it is not clear to me what this parameter is used for.
>        There are putBytes and getBytes methods in DiskStore.scala in the Spark source code; are these the memory-mapped files mentioned above? How should I understand them?
>        Thanks for any help!
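
For reference, spark.storage.memoryMapThreshold is not a general memory-mapped-file API for user data: it is the minimum block size above which DiskStore memory-maps a block file (via java.nio FileChannel.map) when reading it back from disk in getBytes, rather than copying it into an ordinary buffer; as of Spark 2.1 the write path (putBytes) does not use memory mapping. Below is a minimal sketch, in Scala, of how the threshold could be raised for a job; the 8m value and the app name are only illustrative, not a recommendation.

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Hypothetical example: only disk-store blocks larger than 8 MB will be
    // memory-mapped when they are read back from disk.
    val conf = new SparkConf()
      .setAppName("memory-map-threshold-example")
      .set("spark.storage.memoryMapThreshold", "8m")

    val spark = SparkSession.builder().config(conf).getOrCreate()

The mapping itself is the standard JDK mechanism, so a rough sketch of what happens above the threshold (using a made-up file path) looks like the following; the OS pages the file in lazily instead of reading it all into the JVM heap at once:

    import java.io.RandomAccessFile
    import java.nio.channels.FileChannel

    // Map an on-disk file read-only and read through the mapping.
    val channel = new RandomAccessFile("/tmp/example-block-file", "r").getChannel
    try {
      val buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
      if (buffer.remaining() > 0) println(buffer.get(0)) // first byte of the file
    } finally {
      channel.close()
    }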



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org