Posted to issues@spark.apache.org by "Xianyang Liu (JIRA)" <ji...@apache.org> on 2017/09/11 08:00:20 UTC
[jira] [Updated] (SPARK-21923) Avoid calling reserveUnrollMemoryForThisTask for every record
[ https://issues.apache.org/jira/browse/SPARK-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xianyang Liu updated SPARK-21923:
---------------------------------
Summary: Avoid calling reserveUnrollMemoryForThisTask for every record (was: Avoid call reserveUnrollMemoryForThisTask every record)
> Avoid calling reserveUnrollMemoryForThisTask for every record
> -------------------------------------------------------------
>
> Key: SPARK-21923
> URL: https://issues.apache.org/jira/browse/SPARK-21923
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Xianyang Liu
>
> When Spark persists data to off-heap (unsafe) memory, it calls the method `MemoryStore.putIteratorAsBytes`, which needs to synchronize on the `memoryManager` for every record written. This per-record synchronization is unnecessary: we can reserve a larger amount of memory at a time and amortize the synchronization cost across many records.
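> A minimal, self-contained sketch of the idea (the classes and names below are illustrative, not Spark's actual `MemoryStore` internals): reserve memory from the synchronized manager in larger chunks and only go back to it when the current chunk is exhausted, instead of once per record.
> ```scala
> object ChunkedReservationSketch {
>
>   // Stand-in for the memory manager: every call synchronizes, which is the
>   // per-record cost we want to amortize.
>   class ToyMemoryManager {
>     private var reserved = 0L
>     def acquire(bytes: Long): Boolean = synchronized { reserved += bytes; true }
>   }
>
>   def writeRecords(records: Iterator[Array[Byte]], mm: ToyMemoryManager): Unit = {
>     val chunkSize = 1L << 20          // reserve 1 MiB at a time instead of per record
>     var unused = 0L                   // bytes reserved but not yet consumed by records
>     records.foreach { rec =>
>       if (unused < rec.length) {
>         val toReserve = math.max(chunkSize, rec.length.toLong)
>         if (mm.acquire(toReserve)) unused += toReserve
>       }
>       unused -= rec.length            // account the record against the reserved chunk
>     }
>   }
> }
> ```
> Reserving a fixed chunk trades a small amount of over-reservation for far fewer synchronized calls into the memory manager.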
> Test case:
> ```scala
> // Requires off-heap storage to be enabled, e.g.
> // spark.memory.offHeap.enabled=true and spark.memory.offHeap.size set.
> import org.apache.spark.storage.StorageLevel
>
> val start = System.currentTimeMillis()
> val data = sc.parallelize(0 until Integer.MAX_VALUE, 100)
>   .persist(StorageLevel.OFF_HEAP)
>   .count()
> println(System.currentTimeMillis() - start)
> ```
> Test result (elapsed time in ms, five runs each):
> | before | 27647 | 29108 | 28591 | 28264 | 27232 |
> | after | 26868 | 26358 | 27767 | 26653 | 26693 |
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)