Posted to user@ignite.apache.org by Tolga Kavukcu <ka...@gmail.com> on 2017/02/09 11:50:28 UTC

Cache write behind optimization

Hi everyone,

I discovered that my write-behind updates are executed within the same
thread in which I perform the put operation. I can see that this happens
under load. Could anyone tell me which parameters I should tune? I don't
want the write-behind operation to run in my threads.

<bean class="org.apache.ignite.configuration.CacheConfiguration" name="DEFAULT">
        <property name="rebalanceThrottle" value="100"/>
        <property name="rebalanceBatchSize" value="#{2 * 1024 * 1024}"/>
        <property name="rebalanceMode" value="SYNC"/>
        <property name="atomicityMode" value="ATOMIC" />
        <property name="cacheMode" value="PARTITIONED" />
        <property name="memoryMode" value="OFFHEAP_TIERED" />
        <property name="backups" value="1" />
        <property name="affinity">
            <bean
class="org.apache.ignite.cache.affinity.fair.FairAffinityFunction">
                <constructor-arg index="0" type="int" value="128"/>
            </bean>
        </property>
        <property name="offHeapMaxMemory" value="0" />
        <property name="writeThrough" value="true" />
        <property name="writeBehindEnabled" value="true" />
        <property name="eagerTtl" value="false"/>
        <property name="writeBehindFlushFrequency" value="#{10 * 1000}" />
        <property name="writeBehindBatchSize" value="100000" />
        <property name="writeBehindFlushThreadCount" value="2" />
        <property name="writeBehindFlushSize" value="100000" />
        <property name="startSize" value="250000" />
        <property name="statisticsEnabled" value="true" />
    </bean>
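
For reference, here is the write-behind part of this configuration expressed
programmatically. This is only a sketch of what I believe the equivalent Java
setup looks like (values mirror the XML above):

    CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("DEFAULT");

    // Write-through store with asynchronous (write-behind) batching.
    ccfg.setWriteThrough(true);
    ccfg.setWriteBehindEnabled(true);

    // Flush every 10 seconds, in batches of up to 100,000 entries,
    // using 2 dedicated flusher threads.
    ccfg.setWriteBehindFlushFrequency(10 * 1000);
    ccfg.setWriteBehindBatchSize(100000);
    ccfg.setWriteBehindFlushThreadCount(2);

    // Size of the write-behind queue; this also drives back-pressure.
    ccfg.setWriteBehindFlushSize(100000);

The thread dump below, captured during a put, shows the store write happening
on the caller's thread instead of a flusher thread: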

java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:143)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:112)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:70)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:283)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1818)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:377)
- locked <0x0000000795dc0038> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1026)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1661)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:2544)
at com.intellica.evam.engine.db.store.ScenarioRecordStore.writeAll(ScenarioRecordStore.java:114)
at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:685)
at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.applyBatch(GridCacheWriteBehindStore.java:618)
at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.flushSingleValue(GridCacheWriteBehindStore.java:580)
at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateCache(GridCacheWriteBehindStore.java:538)
at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.write(GridCacheWriteBehindStore.java:454)
at org.apache.ignite.internal.processors.cache.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:575)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2436)
- locked <0x000000078f8c1460> (a org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicOffHeapCacheEntry)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2478)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1880)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1720)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.mapSingle(GridNearAtomicAbstractUpdateFuture.java:259)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:508)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:438)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:208)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1238)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:674)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2243)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2220)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1376)
at com.intellica.project.helper.ee.ConfigManagerHelperEE.setState(ConfigManagerHelperEE.java:92)


Thanks.
-- 

*Tolga KAVUKÇU*

Re: Cache write behind optimization

Posted by Tolga Kavukcu <ka...@gmail.com>.
Hi Yakov,

I am trying to process data based on the primary node calculation, using the
mapKeyToNode function of the cache's affinity function, so I expect no
remote access. I will try to reduce the problem to a reproducible code
piece.
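
A minimal sketch of the locality check I mean (names are illustrative, not my
actual code):

    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cluster.ClusterNode;

    Affinity<Object> aff = ignite.affinity("DEFAULT");

    // Process a key only if its primary copy lives on this node, so the
    // subsequent cache.get() should not need a network hop.
    ClusterNode primary = aff.mapKeyToNode(key);

    if (primary.isLocal()) {
        Object rec = cache.get(key); // expected to be a local read
        // ... process rec ...
    }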

Thanks for your help.

On Tue, Feb 21, 2017 at 11:09 AM, Yakov Zhdanov <yz...@apache.org> wrote:

> Tolga, this looks like you do a cache.get() and the key resides on a remote
> node. So, yes, the local node waits for a response from the remote node.
>
> --Yakov
>
> 2017-02-21 10:23 GMT+03:00 Tolga Kavukcu <ka...@gmail.com>:
>
>> Hi Val, everyone,
>>
>> I was able to overcome the write-behind issue and can now process extremely
>> fast on a single node. But when I switched to multi-node with partitioned
>> mode, my threads started waiting on a condition. There are 16 threads
>> processing data, and they all wait at the same trace. Adding the thread
>> dump:
>>
>>  java.lang.Thread.State: WAITING (parking)
>> at sun.misc.Unsafe.park(Native Method)
>> - parking to wait for  <0x0000000711093898> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
>> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4629)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1386)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1118)
>> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurrentScenarioRecord(ScenarioCacheDao.java:35)
>>
>> What might be the reason for the problem? Does it wait for a response
>> from the other node?
>>
>> -Regards.
>>
>> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu <ka...@gmail.com>
>> wrote:
>>
>>> Hi Val,
>>>
>>> Thanks for your tip. With enough memory, I believe the write-behind queue
>>> can handle peak times.
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>>> valentin.kulichenko@gmail.com> wrote:
>>>
>>>> Hi Tolga,
>>>>
>>>> There is a back-pressure mechanism to ensure that a node doesn't run out
>>>> of memory because of a too-long write-behind queue. You can try
>>>> increasing the writeBehindFlushSize property to relax it.
>>>>
>>>> -Val
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> *Tolga KAVUKÇU*
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>


-- 

*Tolga KAVUKÇU*

Re: Cache write behind optimization

Posted by Yakov Zhdanov <yz...@apache.org>.
Tolga, this looks like you do a cache.get() and the key resides on a remote
node. So, yes, the local node waits for a response from the remote node.
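
One way to avoid the remote get is to ship the computation to the key's
primary node instead, e.g. with affinityRun. A minimal sketch (the cache name
and job body are illustrative):

    ignite.compute().affinityRun("DEFAULT", key, () -> {
        // Executes on the node owning the primary copy of 'key',
        // so this get() is served locally on that node.
        IgniteCache<Object, Object> cache = Ignition.localIgnite().cache("DEFAULT");

        Object rec = cache.get(key);
        // ... process rec ...
    });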

--Yakov

2017-02-21 10:23 GMT+03:00 Tolga Kavukcu <ka...@gmail.com>:

> Hi Val, everyone,
>
> I was able to overcome the write-behind issue and can now process extremely
> fast on a single node. But when I switched to multi-node with partitioned
> mode, my threads started waiting on a condition. There are 16 threads
> processing data, and they all wait at the same trace. Adding the thread
> dump:
>
>  java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0000000711093898> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4629)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1386)
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1118)
> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurrentScenarioRecord(ScenarioCacheDao.java:35)
>
> What might be the reason for the problem? Does it wait for a response from
> the other node?
>
> -Regards.
>
> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu <ka...@gmail.com>
> wrote:
>
>> Hi Val,
>>
>> Thanks for your tip. With enough memory, I believe the write-behind queue
>> can handle peak times.
>>
>> Thanks.
>>
>> Regards.
>>
>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>> valentin.kulichenko@gmail.com> wrote:
>>
>>> Hi Tolga,
>>>
>>> There is a back-pressure mechanism to ensure that a node doesn't run out
>>> of memory because of a too-long write-behind queue. You can try increasing
>>> the writeBehindFlushSize property to relax it.
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>
>
> --
>
> *Tolga KAVUKÇU*
>

Re: Cache write behind optimization

Posted by Tolga Kavukcu <ka...@gmail.com>.
Hi Val, everyone,

I was able to overcome the write-behind issue and can now process extremely
fast on a single node. But when I switched to multi-node with partitioned
mode, my threads started waiting on a condition. There are 16 threads
processing data, and they all wait at the same trace. Adding the thread dump:

 java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0000000711093898> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4629)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1386)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1118)
at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurrentScenarioRecord(ScenarioCacheDao.java:35)

What might be the reason for the problem? Does it wait for a response from
the other node?

-Regards.

On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu <ka...@gmail.com>
wrote:

> Hi Val,
>
> Thanks for your tip. With enough memory, I believe the write-behind queue
> can handle peak times.
>
> Thanks.
>
> Regards.
>
> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
> valentin.kulichenko@gmail.com> wrote:
>
>> Hi Tolga,
>>
>> There is a back-pressure mechanism to ensure that a node doesn't run out
>> of memory because of a too-long write-behind queue. You can try increasing
>> the writeBehindFlushSize property to relax it.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
>
> *Tolga KAVUKÇU*
>



-- 

*Tolga KAVUKÇU*

Re: Cache write behind optimization

Posted by Tolga Kavukcu <ka...@gmail.com>.
Hi Val,

Thanks for your tip. With enough memory, I believe the write-behind queue
can handle peak times.

Thanks.

Regards.

On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <va...@gmail.com>
wrote:

> Hi Tolga,
>
> There is a back-pressure mechanism to ensure that a node doesn't run out
> of memory because of a too-long write-behind queue. You can try increasing
> the writeBehindFlushSize property to relax it.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

*Tolga KAVUKÇU*

Re: Cache write behind optimization

Posted by vkulichenko <va...@gmail.com>.
Hi Tolga,

There is a back-pressure mechanism to ensure that a node doesn't run out of
memory because of a too-long write-behind queue. You can try increasing the
writeBehindFlushSize property to relax it.
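
If I read the write-behind store internals correctly (an assumption worth
verifying against your Ignite version), caller threads are drafted into
flushing once the queue grows past roughly 1.5x writeBehindFlushSize, which
matches the flushSingleValue frame in your first dump: with flushSize =
100000 that would kick in around 150000 pending entries. A sketch of
relaxing it (the new value is illustrative, and it needs enough heap to hold
the larger queue):

    CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("DEFAULT");

    // Raise the back-pressure threshold so user threads are not pulled
    // into flushing during load spikes.
    ccfg.setWriteBehindFlushSize(500000);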

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.