Posted to user@ignite.apache.org by lmark58 <la...@principled.io> on 2018/01/10 01:03:52 UTC

IgniteOutOfMemoryException when using putAll instead of put

For testing I created a data region of 21 MB:

val dataRegionConfiguration =
  new DataRegionConfiguration()
    .setName("testRegion")
    .setInitialSize(21 * 1024 * 1024)
    .setMaxSize(21 * 1024 * 1024)
    .setPersistenceEnabled(false)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
    .setMetricsEnabled(true)
    .setEvictionThreshold(0.9)

I then created a cache that uses that data region.

      val cfg = new CacheConfiguration[Int, String]()
        .setName("testCache")
        .setCacheMode(CacheMode.PARTITIONED) // the most efficient mode that allows a client to read
        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
        .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL)) // never throw away data
        .setDataRegionName("testRegion")
        .setStatisticsEnabled(true)

      val myCache = ignite.getOrCreateCache(cfg)

This region is large enough to hold about 9900 entries where the String
value has a length of 1200.
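
(Back-of-envelope, using only the numbers above: 21 MB is 21 * 1024 * 1024 /
4096 = 5,376 pages of 4 KB, and 21 MB / ~9,900 entries works out to roughly
2.2 KB of region space per entry, so presumably a bit under two entries per
data page once per-entry overhead and index pages are counted.)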

If I put 20,000 values into the cache one at a time using put, it works as I
expect: there is no error, and a subset of the values is retained in the
cache.

But if I do a putAll of the 20,000 values, I get an IgniteOutOfMemoryException
(stack trace below). Is this expected behavior? The error suggests enabling
evictions, but they are already enabled.
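
For reference, the test does roughly this (a simplified sketch; myCache is the
cache created above and the 1200-character value is just illustrative):

    import scala.collection.JavaConverters._

    val value = "x" * 1200

    // Variant 1: one entry at a time -- old pages get evicted and this completes fine.
    (1 to 20000).foreach(i => myCache.put(i, value))

    // Variant 2: a single batch -- this is the call that throws IgniteOutOfMemoryException.
    val batch = (1 to 20000).map(i => i -> value).toMap
    myCache.putAll(batch.asJava)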

This test runs just a single instance of Ignite embedded in the test program.
In production I will have much more memory, but I want to understand whether
this is a bug: there can always be a case where a putAll requires more memory
than is currently available, and if it does not evict pages I could hit this
in production.

[19:41:08,411][ERROR][main][GridDhtAtomicCache] <fubar> Unexpected exception
during cache update
class org.apache.ignite.IgniteException: Runtime failure on search row:
org.apache.ignite.internal.processors.cache.tree.SearchRow@262b2c86
	at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1632)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1201)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:343)
	at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2419)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1882)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1735)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:812)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:664)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAll0(GridDhtAtomicCache.java:1068)
	at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAll0(GridDhtAtomicCache.java:647)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2760)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1068)
	at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:928)
	at IgniteMain$.$anonfun$main$8(IgniteMain.scala:64)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
	at scala.util.Try$.apply(Try.scala:209)
	at IgniteMain$.main(IgniteMain.scala:64)
	at IgniteMain.main(IgniteMain.scala)
Caused by: class org.apache.ignite.internal.mem.IgniteOutOfMemoryException:
Not enough memory allocated (consider increasing data region size or
enabling evictions) [policyName=RefData, size=22.0 MB]
	at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:292)
	at
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.allocateDataPage(FreeListImpl.java:456)
	at
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.insertDataRow(FreeListImpl.java:494)
	at
org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:90)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1255)
	at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4408)
	at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4204)
	at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3918)
	at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2988)
	at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2882)
	at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1713)
	at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1602)
	... 21 more





Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Alexey Popov <ta...@gmail.com>.
Hi Larry,

I checked the code.
The issue is specific to your test data.
You have relatively large initial entries (4.7 Kbytes) with the same index
length (it is just a String). Please note that the index entry can't fit into
a single page (4 KB).

The rest of the entries, loaded via .get() from the store, are relatively
short (just the word "foo").
It seems that Ignite can't make a correct eviction threshold estimation
(including the index) in your case.

If you change .setEvictionThreshold(.9) to .setEvictionThreshold(.8) with
the same test data then everything works as expected.

Anyway, I will open a ticket for your reproducer.

Thank you,
Alexey




Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Larry Mark <la...@principled.io>.
No problem, this is not a short-term blocker. It is just something I need to
understand better to make sure that I do not configure things in a way that
produces unexpected OOMs in production.


On Mon, Jan 15, 2018 at 1:18 PM, Alexey Popov <ta...@gmail.com> wrote:

> Hi Larry,
>
> I am without my PC for a while. I will check the file you attached later
> this week.
>
> Thanks,
> Alexey
>
>
>
>

Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Alexey Popov <ta...@gmail.com>.
Hi Larry,

I am without my PC for a while. I will check the file you attached later
this week.

Thanks,
Alexey




Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Larry Mark <la...@principled.io>.
Alexey,

The runtime class is used so I can have a common method to create any cache
type and index the key and value types of the cache.
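
Roughly, the helper looks like this (a simplified sketch with an illustrative
method name; the real helper sets more options):

    import scala.reflect.ClassTag
    import org.apache.ignite.configuration.CacheConfiguration

    // The ClassTags give the helper access to the runtime classes of K and V,
    // which is where keytag.runtimeClass / valtag.runtimeClass come from.
    def cacheConfig[K, V](name: String)(implicit keytag: ClassTag[K],
                                        valtag: ClassTag[V]): CacheConfiguration[K, V] =
      new CacheConfiguration[K, V](name)
        .setIndexedTypes(keytag.runtimeClass, valtag.runtimeClass)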

To simplify things, attached is a tar file with a small program that throws
an OOM exception for me. I get the OOM when loading from the cache store on
misses. I would expect Ignite to simply evict memory pages when it inserts a
new value, and not to care whether the insert comes from a put or from a
read-through of the cache store.

If you comment out line 79 in the IgniteConfigGenerator class then it does
not get the OOM.

This is a simple test program; in my production code the value in the cache
will be an object, and I am calling setIndexedTypes to create a table for
that value so I can use SQL queries.
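
For illustration, the production shape is roughly this (the class and field
names here are made up):

    import org.apache.ignite.cache.query.SqlFieldsQuery
    import org.apache.ignite.cache.query.annotations.QuerySqlField

    // Hypothetical value object; setIndexedTypes(classOf[Integer], classOf[RefValue])
    // creates the SQL table for it so it can be queried.
    class RefValue {
      @QuerySqlField(index = true) var symbol: String = _
      var payload: String = _
    }

    // e.g. cache.query(new SqlFieldsQuery("select symbol from RefValue where symbol = ?").setArgs("ABC"))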

Can you let me know if you get the same results, and if so why we get the
OOM?

Thanks,

Larry



On Fri, Jan 12, 2018 at 6:53 AM, Alexey Popov <ta...@gmail.com> wrote:

> Hi,
>
> You are right, "evicts=0" is related to cache evictions for on-heap caching
> [1]. It should be always 0 for you.
>
> I tried your case (with the same configs as you) and page evictions work
> fine with cache store enabled and indexed types. It seems that you have
> some
> misconfiguration.
>
> What are you trying to achieve by adding
> .setIndexedTypes(keytag.runtimeClass, valtag.runtimeClass) to String-value
> cache? and what is keytag.runtimeClass and valtag.runtimeClass?
>
> Could you please try with DummyClass with valid indexes enabled as below:
>
> /**
>  * DummyClass
>  */
> public class DummyClass {
>     /** Dummy string. */
>     public String dummyStr;
>
>     /** Dummy int. */
>     @QuerySqlField(index = true)
>     public Integer dummyInt;
>
>     public DummyClass(Integer dummyInt) {
>         this.dummyInt = dummyInt;
>         this.dummyStr = StringUtils.rightPad(dummyInt.toString(), 1024,
> '*');
>     }
> }
>
>         CacheConfiguration<Integer, DummyClass> cacheCfg = new
> CacheConfiguration<Integer, DummyClass>(CACHE_NAME)
>             .setCacheMode(CacheMode.PARTITIONED)
>             .setAtomicityMode(CacheAtomicityMode.ATOMIC)
>
> .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL))
>             .setDataRegionName(REG_NAME)
>             .setStatisticsEnabled(true)
>
> .setCacheStoreFactory(FactoryBuilder.factoryOf(
> DummyStoreFromAdapter.class))
>             .setReadThrough(true)
>             .setIndexedTypes(Integer.class, DummyClass.class);
>
> Thanks,
> Alexey
>
> [1] https://apacheignite.readme.io/docs/evictions#section-java-heap-cache
>
>
>
>

Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Alexey Popov <ta...@gmail.com>.
Hi, 

You are right, "evicts=0" relates to cache entry evictions for on-heap caching
[1]. It should always be 0 for you.
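
If you want to see the page-level numbers, they should be exposed on
DataRegionMetrics rather than on the cache metrics. A quick sketch (assuming
the 2.3 metrics API, and it relies on setMetricsEnabled(true) on the region,
which you already have):

    import scala.collection.JavaConverters._

    ignite.dataRegionMetrics().asScala
      .filter(_.getName == "RefData")
      .foreach(m => println(s"allocatedPages=${m.getTotalAllocatedPages}, " +
        s"evictionRate=${m.getEvictionRate}, fillFactor=${m.getPagesFillFactor}"))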

I tried your case (with the same configs as you) and page evictions work
fine with the cache store enabled and indexed types. It seems that you have
some misconfiguration.

What are you trying to achieve by adding
.setIndexedTypes(keytag.runtimeClass, valtag.runtimeClass) to a String-value
cache? And what are keytag.runtimeClass and valtag.runtimeClass?

Could you please try with DummyClass with valid indexes enabled as below:

/**
 * DummyClass
 */
public class DummyClass {
    /** Dummy string. */
    public String dummyStr;

    /** Dummy int. */
    @QuerySqlField(index = true)
    public Integer dummyInt;

    public DummyClass(Integer dummyInt) {
        this.dummyInt = dummyInt;
        this.dummyStr = StringUtils.rightPad(dummyInt.toString(), 1024, '*');
    }
}

        CacheConfiguration<Integer, DummyClass> cacheCfg =
            new CacheConfiguration<Integer, DummyClass>(CACHE_NAME)
                .setCacheMode(CacheMode.PARTITIONED)
                .setAtomicityMode(CacheAtomicityMode.ATOMIC)
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL))
                .setDataRegionName(REG_NAME)
                .setStatisticsEnabled(true)
                .setCacheStoreFactory(FactoryBuilder.factoryOf(DummyStoreFromAdapter.class))
                .setReadThrough(true)
                .setIndexedTypes(Integer.class, DummyClass.class);

Thanks,
Alexey

[1] https://apacheignite.readme.io/docs/evictions#section-java-heap-cache




Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Larry Mark <la...@principled.io>.
Here are the configurations

val dataRegionConfiguration =
  new DataRegionConfiguration()
    .setName("RefData")
    .setInitialSize(21 * 1024 * 1024)
    .setMaxSize(21 * 1024 * 1024)
    .setPersistenceEnabled(false)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
    .setMetricsEnabled(true)
    .setEvictionThreshold(0.9)

I then created a cache that uses that data region.

      val cfg = new CacheConfiguration[Int, String]()
        .setName("testCache")
        .setCacheMode(CacheMode.PARTITIONED)
        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
        .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ETERNAL))
        // uncommenting the next line causes the error:
        // .setIndexedTypes(keytag.runtimeClass, valtag.runtimeClass)
        .setDataRegionName("RefData")
        .setStatisticsEnabled(true)

      val cacheFactory = FactoryBuilder.factoryOf(classOf[CacheConstantString[Int, String]])
      cfg.setCacheStoreFactory(cacheFactory)
        .setReadThrough(readThrough)

      val myCache = ignite.getOrCreateCache(cfg)

// This is my stubbed read-through
class CacheConstantString[K, V] extends CacheStoreAdapter[K, V] with Logging {
  override def load(key: K): V = {
    "foo".asInstanceOf[V]
  }
}

It is the combination of having the cache store enabled and setting indexed
types that is the problem; it works fine without either, but I need the
read-through. I never see any value in the metrics except evicts=0, and I
know it must be evicting pages, because I am writing 35,000 keys into a data
region that only has enough space for about 8,000. This is a cache metric;
is it showing page evicts or key evicts? Because I have an expiry of ETERNAL,
I do not expect to see key evicts.


On Thu, Jan 11, 2018 at 8:40 AM, Alexey Popov <ta...@gmail.com> wrote:

> Hi,
>
> Can you share your configuration for
> 1) cache
> 2) memory region?
>
> I see "evicts=0" in your stats that looks very strange. Are you sure you
> have a configured eviction policy in the data region (policyName=RefData)?
>
> Does this cache work fine (evicts some data) without Cache Store enabled?
>
> Thank you,
> Alexey
>
>
>
>

Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Alexey Popov <ta...@gmail.com>.
Hi,

Can you share your configuration for
1) cache
2) memory region?

I see "evicts=0" in your stats that looks very strange. Are you sure you
have a configured eviction policy in the data region (policyName=RefData)?

Does this cache work fine (evicts some data) without Cache Store enabled?

Thank you,
Alexey




Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Larry Mark <la...@principled.io>.
Thanks for the quick response. I have observed similar behavior with
3rd-party persistence read-through IF I set indexed types for the cache.

Test case: load up the cache using put with 35,000 entries (keys 1 ->
35,000), then read every key using get(key).
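
In code, the test is essentially this (simplified sketch; the value contents
are illustrative):

    val value = "x" * 1200
    (1 to 35000).foreach(i => myCache.put(i, value))   // load 35,000 entries

    // Read everything back; misses go through the CacheStore (read-through)
    // and the loaded values get inserted into the off-heap region.
    (1 to 35000).foreach(i => myCache.get(i))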

This is the use case I want in my application: I keep an active subset of my
data in memory, and if a key is accessed that is not in memory, it is read in
from a Postgres database. I am only doing read-through, not write-through,
since there is a different path for the data to get into Postgres.

I can see from the cache metrics (shown below) that I perform 34,011 reads,
of which 6,093 are hits and 27,918 are misses, and then I get the OOM error.
This only happens if indexed types are set on the cache. Is this expected
behavior? If I am not using SQL queries on the cache, only get and put, does
it matter if I do not set indexed types? Does it help or hurt performance in
any way?

Cache metrics and stack trace shown below.

CacheMetricsSnapshot [reads=34011, puts=35000, hits=6093, misses=27918,
txCommits=0, txRollbacks=0, evicts=0, removes=0, putAvgTimeNanos=190.56557,
getAvgTimeNanos=53.07989, rmvAvgTimeNanos=0.0, commitAvgTimeNanos=0.0,
rollbackAvgTimeNanos=0.0, cacheName=fubar, offHeapGets=0, offHeapPuts=0,
offHeapRemoves=0, offHeapEvicts=0, offHeapHits=0, offHeapMisses=0,
offHeapEntriesCnt=33726, heapEntriesCnt=2, offHeapPrimaryEntriesCnt=33726,
offHeapBackupEntriesCnt=0, offHeapAllocatedSize=0, size=33726,
keySize=33726, isEmpty=false, dhtEvictQueueCurrSize=-1, txThreadMapSize=0,
txXidMapSize=0, txCommitQueueSize=0, txPrepareQueueSize=0,
txStartVerCountsSize=0, txCommittedVersionsSize=0,
txRolledbackVersionsSize=0, txDhtThreadMapSize=0, txDhtXidMapSize=-1,
txDhtCommitQueueSize=0, txDhtPrepareQueueSize=0, txDhtStartVerCountsSize=0,
txDhtCommittedVersionsSize=-1, txDhtRolledbackVersionsSize=-1,
isWriteBehindEnabled=false, writeBehindFlushSize=-1,
writeBehindFlushThreadCnt=-1, writeBehindFlushFreq=-1,
writeBehindStoreBatchSize=-1, writeBehindTotalCriticalOverflowCnt=-1,
writeBehindCriticalOverflowCnt=-1, writeBehindErrorRetryCnt=-1,
writeBehindBufSize=-1, totalPartitionsCnt=1024, rebalancingPartitionsCnt=0,
keysToRebalanceLeft=0, rebalancingKeysRate=0, rebalancingBytesRate=0,
rebalanceStartTime=0, rebalanceFinishTime=0, keyType=java.lang.Object,
valType=java.lang.Object, isStoreByVal=true, isStatisticsEnabled=true,
isManagementEnabled=false, isReadThrough=true, isWriteThrough=false]


[14:46:05,572][ERROR][sys-#52][GridPartitionedSingleGetFuture] Failed to
get values from dht cache [fut=GridFutureAdapter [ignoreInterrupts=false,
state=DONE, res=class o.a.i.IgniteCheckedException: Not enough memory
allocated (consider increasing data region size or enabling evictions)
[policyName=RefData, size=22.0 MB], hash=315679498]]
class org.apache.ignite.IgniteCheckedException: Not enough memory allocated
(consider increasing data region size or enabling evictions)
[policyName=RefData, size=22.0 MB]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7252)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:975)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.internal.mem.IgniteOutOfMemoryException:
Not enough memory allocated (consider increasing data region size or
enabling evictions) [policyName=RefData, size=22.0 MB]
at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:292)
at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117)
at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$8400(BPlusTree.java:81)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.insertWithSplit(BPlusTree.java:2703)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.insert(BPlusTree.java:2665)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$2500(BPlusTree.java:2547)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:411)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:392)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4697)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4682)
at
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:342)
at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:261)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$11100(BPlusTree.java:81)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.tryInsert(BPlusTree.java:2859)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$7600(BPlusTree.java:2547)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2285)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2266)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2266)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2006)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.put(BPlusTree.java:1977)
at
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.put(H2TreeIndex.java:197)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:537)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:488)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:559)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1747)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:425)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1354)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1209)
at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:343)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3191)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.versionedValue(GridCacheMapEntry.java:2726)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$16$1.apply(GridCacheAdapter.java:2032)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$16$1.apply(GridCacheAdapter.java:2011)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:423)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:389)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:2011)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$16.call(GridCacheAdapter.java:2009)
at
org.apache.ignite.internal.processors.cache.GridCacheContext$3.call(GridCacheContext.java:1412)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)






On Wed, Jan 10, 2018 at 8:46 AM, Alexey Popov <ta...@gmail.com> wrote:

> Hi,
>
> You are right, cache.putAll() can't evict the entries from the batch it is
> working on, and you can get Ignite OOME.
> This is expected behavior because putAll get locks for all provided entry
> keys. That is critical:
> 1) for transactional caches and
> 2) any caches backed up by 3-rd party persistence store.
>
> There was an intention to optimize this behavior for atomic caches without
> cache store [1] but it seems it will not be implemented. So, you could rely
> on this behavior.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-514.
>
> Thank you,
> Alexey
>
>
>
>
>

Re: IgniteOutOfMemoryException when using putAll instead of put

Posted by Alexey Popov <ta...@gmail.com>.
Hi,

You are right: cache.putAll() can't evict the entries from the batch it is
working on, so you can get an Ignite OOME.
This is expected behavior because putAll takes locks for all of the provided
entry keys. That is critical:
1) for transactional caches and
2) for any caches backed by a 3rd-party persistence store.

There was an intention to optimize this behavior for atomic caches without a
cache store [1], but it seems it will not be implemented. So you can rely on
this behavior.

[1] https://issues.apache.org/jira/browse/IGNITE-514.
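
If it helps: since the whole batch passed to putAll has to fit in the region
at once, one workaround for bulk loading is to split the map into smaller
batches, so that each putAll only has to hold a bounded number of entries at
a time. A rough sketch (the helper name is mine and the batch size is
arbitrary):

    import scala.collection.JavaConverters._
    import org.apache.ignite.IgniteCache

    // Bound how much a single putAll has to hold by splitting the input map.
    def putAllBatched[K, V](cache: IgniteCache[K, V], entries: Map[K, V], batchSize: Int = 1000): Unit =
      entries.grouped(batchSize).foreach(batch => cache.putAll(batch.asJava))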

Thank you,
Alexey



