Posted to user@ignite.apache.org by Erik Vanherck <Er...@inventivegroup.com> on 2016/01/11 07:50:04 UTC

Write Behind data safety guarantees ?

Hi,

I’m wondering what guarantees, if any, write-behind mode actually has when the volume of incoming writes exceeds what it is configured to write to the backing store?

For example, suppose I do this:


import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

final CacheConfiguration<Long, byte[]> rccfg = new CacheConfiguration<>();
rccfg.setBackups(2);
rccfg.setManagementEnabled(true);
rccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); // wait until backups are written
rccfg.setCacheMode(CacheMode.PARTITIONED);
rccfg.setEvictionPolicy(new LruEvictionPolicy<Long, byte[]>());
rccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
rccfg.setLoadPreviousValue(false);
rccfg.setName("Resources-UserStore");
rccfg.setOffHeapMaxMemory(512 * 1024 * 1024); // in bytes
rccfg.setReadFromBackup(true);
rccfg.setStartSize(5000);
rccfg.setCacheStoreFactory(new ResourceCacheStoreFactory(fDiskStorage)); // our own disk-backed store
rccfg.setReadThrough(true);
rccfg.setWriteThrough(true); // must be enabled for write-behind to take effect
rccfg.setWriteBehindEnabled(true);
rccfg.setWriteBehindFlushFrequency(2 * 60 * 1000); // in millis
rccfg.setWriteBehindFlushSize(100);
rccfg.setSwapEnabled(false);
rccfg.setRebalanceBatchSize(2 * 1024 * 1024); // in bytes
rccfg.setRebalanceThrottle(200); // in millis
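
I then create and write to the cache roughly along these lines (a simplified sketch, not our actual code; loadResourceBytes(...) is just a stand-in for where our payloads really come from):

try (Ignite ignite = Ignition.start()) {
    IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache(rccfg);

    // Writes arrive continuously; every put first lands in the
    // write-behind queue before the underlying store sees it.
    for (long id = 0; id < 1_000_000; id++)
        cache.put(id, loadResourceBytes(id)); // stand-in for the real payload source
}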

What happens if writes keep coming in but the write-behind threads can’t keep up, either because the store is too slow to accept them all or because the flush size and flush frequency limits would have to be broken? Will it start losing data once memory fills up and it needs to evict entries, or does it increase the flush frequency and/or block?

PS: it took me a while to figure out that I had to enable WriteThrough before WriteBehind would take effect; that seemed counterintuitive to me.

Cheers,
Erik Vanherck

Re: Write Behind data safety guarantees ?

Posted by Erik Vanherck <Er...@inventivegroup.com>.
Cool! That’s what I was hoping it would do.

Thx,
Erik

On 11 Jan 2016, at 08:27, Denis Magda <dm...@gridgain.com> wrote:

Hi Erik,

When a new cache entry arrives at the write-behind store and there is no more space to queue it up for later flushing, the store performs a flush of a single already-queued entry from the thread that is writing the newly arrived entry.
It means that a cache.put(...) that runs into the write-behind space limit becomes synchronous, and no data is lost.

Regards,
Denis


Re: Write Behind data safety guarantees ?

Posted by Denis Magda <dm...@gridgain.com>.
Hi Erik,

When a new cache entry arrives at the write-behind store and there is no more space to queue it up for later flushing, the store performs a flush of a single already-queued entry from the thread that is writing the newly arrived entry.
It means that a cache.put(...) that runs into the write-behind space limit becomes synchronous, and no data is lost.
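
If you want to observe this behavior yourself, something like the sketch below should show it. Note that SlowStore, the tiny flush size and the timings are invented here purely for illustration; they are not part of your configuration:

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindBackPressureDemo {
    /** A store that is deliberately too slow to keep up with the puts. */
    public static class SlowStore extends CacheStoreAdapter<Long, byte[]> {
        @Override public byte[] load(Long key) throws CacheLoaderException {
            return null; // nothing to load in this demo
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends byte[]> e)
            throws CacheWriterException {
            try {
                Thread.sleep(100); // simulate a slow backing store
            }
            catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }

        @Override public void delete(Object key) throws CacheWriterException {
            // no-op for the demo
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, byte[]> ccfg = new CacheConfiguration<>("demo");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(SlowStore.class));
        ccfg.setWriteThrough(true);       // required for write-behind
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushSize(10); // tiny queue, to hit the limit quickly

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache(ccfg);

            for (long i = 0; i < 50; i++) {
                long start = System.nanoTime();
                cache.put(i, new byte[16]);
                // Once the queue is full, puts slow down to the store's pace
                // instead of dropping entries: that is the synchronous fallback.
                System.out.printf("put %d took %d ms%n",
                    i, (System.nanoTime() - start) / 1_000_000);
            }
        }
    }
}

With a flush size of 10 you should see the first handful of puts return almost immediately, while the later ones take roughly as long as a single store write.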

Regards,
Denis
