Posted to issues@spark.apache.org by "wei wu (JIRA)" <ji...@apache.org> on 2015/12/29 03:31:49 UTC

[jira] [Comment Edited] (SPARK-12196) Store blocks in different speed storage devices by hierarchy way

    [ https://issues.apache.org/jira/browse/SPARK-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073386#comment-15073386 ] 

wei wu edited comment on SPARK-12196 at 12/29/15 2:31 AM:
----------------------------------------------------------

Yes, Hao. The local dir path format ("[SSD]file:///") may not be recognized by the YARN local dir setting.
Another question: if the user has already mounted the devices (in a production cluster) as follows:
/mnt/c, /mnt/d, /mnt/e/, ....,  /mnt/i
then to use the new feature in the new Spark version, the user would have to re-mount the disk devices.
We think the following configuration may be better:
spark.local.dir = /mnt/c, /mnt/d, /mnt/e/, ....,  /mnt/i
spark.storage.hierarchyStore.reserved.quota = SSD 50GB, DISK, SSD 80GB, .... ,  DISK
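
To make the intent concrete, here is a rough sketch of how such a pairing of local dirs and quotas could be parsed. The property names, the DirQuota class and the parse helpers are only illustrative of the proposal above, not an existing Spark API:
{code}
case class DirQuota(path: String, tier: String, reservedBytes: Option[Long])

// Only the "GB" suffix used in the example above is supported here.
def parseSize(s: String): Long =
  s.toUpperCase.stripSuffix("GB").trim.toLong * 1024L * 1024L * 1024L

// Pairs each spark.local.dir entry with the corresponding tag/quota from the
// proposed spark.storage.hierarchyStore.reserved.quota value.
def parseHierarchyQuota(localDirs: String, quotas: String): Seq[DirQuota] = {
  val dirs  = localDirs.split(",").map(_.trim)
  val specs = quotas.split(",").map(_.trim)
  require(dirs.length == specs.length, "each local dir needs a quota entry")
  dirs.zip(specs).toSeq.map {
    case (dir, spec) => spec.split("\\s+") match {
      case Array(tier, size) => DirQuota(dir, tier, Some(parseSize(size)))  // e.g. "SSD 50GB"
      case Array(tier)       => DirQuota(dir, tier, None)                   // e.g. "DISK"
    }
  }
}
{code}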

And we suggest the following configuration idea:
I think we should add a space reserver thread in the block manager to check whether enough space is reserved on each SSD storage. The reserver is meant to solve the problem of running out of free SSD space when blocks are written concurrently. For example: spark.ssd.reserver.interval.ms = 1000
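
A minimal sketch of that reserver idea (the class and property names are only illustrative; nothing here exists in Spark today):
{code}
import java.io.File
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicBoolean

// Background task that periodically checks whether every SSD directory still
// has the reserved headroom; the block allocator could consult ssdHasRoom
// before placing new blocks on SSD. The interval corresponds to the proposed
// spark.ssd.reserver.interval.ms setting.
class SsdSpaceReserver(ssdDirs: Seq[File], reservedBytes: Long, intervalMs: Long) {
  val ssdHasRoom = new AtomicBoolean(true)
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  def start(): Unit =
    scheduler.scheduleAtFixedRate(new Runnable {
      def run(): Unit = {
        // If any SSD dir drops below the reserved quota, stop allocating there.
        ssdHasRoom.set(ssdDirs.forall(_.getUsableSpace > reservedBytes))
      }
    }, 0L, intervalMs, TimeUnit.MILLISECONDS)

  def stop(): Unit = scheduler.shutdown()
}
{code}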

If the SSD capacity is small, the SSD may be used either to cache RDDs or to store shuffle data. Different jobs may compete for the SSD resource (RDD cache vs. shuffle data), but the user may want to give the RDD cache priority on the SSD. I think we should add a similar configuration flag for enabling SSD storage for shuffle data:
spark.ssd.shuffle.enabled = false
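
For illustration only (the BlockType names and tier strings below are hypothetical), the allocator could consult that flag like this:
{code}
// Hypothetical sketch: choose a storage tier depending on the block type and
// on a flag like the proposed spark.ssd.shuffle.enabled.
sealed trait BlockType
case object RddCacheBlock extends BlockType
case object ShuffleBlock  extends BlockType

def preferredTier(blockType: BlockType, ssdShuffleEnabled: Boolean): String =
  blockType match {
    case RddCacheBlock                     => "ssd" // RDD cache always tries SSD first
    case ShuffleBlock if ssdShuffleEnabled => "ssd" // shuffle uses SSD only when allowed
    case ShuffleBlock                      => "hdd" // otherwise keep SSD free for the RDD cache
  }
{code}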






> Store blocks in different speed storage devices by hierarchy way
> ----------------------------------------------------------------
>
>                 Key: SPARK-12196
>                 URL: https://issues.apache.org/jira/browse/SPARK-12196
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: yucai
>
> *Problem*
> Nowadays, users have both SSDs and HDDs.
> SSDs have great performance but small capacity; HDDs have large capacity but are x2-x3 slower than SSDs.
> How can we get the best of both?
> *Solution*
> Our idea is to build a hierarchy store: use SSDs as cache and HDDs as backup storage.
> When Spark core allocates blocks for RDD cache or shuffle, it allocates from the SSDs first; when the SSDs' usable space is less than some threshold, it allocates from the HDDs.
> In our implementation we actually go further: we support building a hierarchy store with any number of levels across all storage media (NVM, SSD, HDD, etc.).
> *Performance*
> 1. In the best case, our solution performs the same as all-SSD.
> 2. In the worst case, e.g. when all data is spilled to HDDs, there is no performance regression.
> 3. Compared with all-HDD, the hierarchy store improves performance by more than *_x1.86_* (it could be higher; the CPU becomes the bottleneck in our test environment).
> 4. Compared with Tachyon, our hierarchy store is still *_x1.3_* faster, because we support both RDD cache and shuffle and need no extra inter-process communication.
> *Usage*
> 1. Set the priority and threshold for each layer in spark.storage.hierarchyStore.
> {code}
> spark.storage.hierarchyStore='nvm 50GB,ssd 80GB'
> {code}
> It builds a 3-layer hierarchy store: the 1st layer is "nvm", the 2nd is "ssd", and all the rest form the last layer.
> 2. Configure each layer's location: the user just needs to put the keywords like "nvm" or "ssd", which are specified in step 1, into the local dirs, such as spark.local.dir or yarn.nodemanager.local-dirs.
> {code}
> spark.local.dir=/mnt/nvm1,/mnt/ssd1,/mnt/ssd2,/mnt/ssd3,/mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4,/mnt/others
> {code}
> After that, restart your Spark application; it will allocate blocks from nvm first.
> When nvm's usable space is less than 50GB, it starts to allocate from ssd.
> When ssd's usable space is less than 80GB, it starts to allocate from the last layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org