Posted to user@spark.apache.org by Salih Kardan <ka...@gmail.com> on 2014/06/04 10:52:45 UTC

How to change default storage levels

Hi

I'm using Spark 0.9.1 and Shark 0.9.1. My dataset does not fit into the memory
I have in my cluster setup, so I want to use disk for caching as well. I believe
MEMORY_ONLY is the default storage level in Spark. If that's the case, how
can I change the storage level to MEMORY_AND_DISK in Spark?

thanks
Salih

Re: How to change default storage levels

Posted by Andrew Ash <an...@andrewash.com>.
You can change the storage level of an individual RDD with
.persist(StorageLevel.MEMORY_AND_DISK), but I don't think you can change
the default persistence level for RDDs.
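
A minimal sketch of what that looks like in Scala (the variable names and the
input path here are placeholders, not from your setup):

    import org.apache.spark.storage.StorageLevel

    // Load the data as an RDD of lines
    val lines = sc.textFile("hdfs:///path/to/data")

    // Keep partitions in memory and spill to disk when they don't fit,
    // instead of the MEMORY_ONLY level you get from .cache()
    lines.persist(StorageLevel.MEMORY_AND_DISK)

    // Actions after this point reuse the persisted partitions
    println(lines.count())

You'd need to call .persist(...) on each RDD you want cached this way.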

Andrew


On Wed, Jun 4, 2014 at 1:52 AM, Salih Kardan <ka...@gmail.com> wrote:

> Hi
>
> I'm using Spark 0.9.1 and Shark 0.9.1. My dataset does not fit into the memory
> I have in my cluster setup, so I want to use disk for caching as well. I believe
> MEMORY_ONLY is the default storage level in Spark. If that's the case, how
> can I change the storage level to MEMORY_AND_DISK in Spark?
>
> thanks
> Salih
>