Posted to user@ignite.apache.org by Prasad Bhalerao <pr...@gmail.com> on 2020/02/24 14:27:52 UTC

Re: Read through not working as expected in case of Replicated cache

Hi,

Is this a bug, or is the cache designed to work this way?

If it is as designed, can this behavior be documented in the Ignite documentation?

Thanks,
Prasad

On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <il...@gmail.com>
wrote:

> Hello!
>
> I have discussed this with fellow Ignite developers, and they say read
> through for a replicated cache works correctly only when either:
>
> - writeThrough is enabled and all changes go through it, or
> - database contents do not change for already-read keys.
>
> I can see that neither condition is met in your case, so the behavior
> you are seeing is expected.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com>:
>
>> I am using Ignite version 2.6.
>>
>> I am starting 3 server nodes with a replicated cache and 1 client node.
>> The cache configuration is as follows: read-through is enabled but
>> write-through is disabled. Load-by-key is implemented in the cache
>> loader as shown below.
>>
>> Steps to reproduce the issue:
>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
>> (The entry is removed from the cache only; it remains in the DB, as
>> write-through is false.)
>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
>> 3) Now query the cache from the client node. Every invocation returns
>> different results: sometimes the reloaded entry is included, sometimes
>> it is not.
>>
>> It looks like read-through does not replicate the reloaded entry to all
>> nodes in the case of a REPLICATED cache.
>>
>> To investigate further, I changed the cache mode to PARTITIONED and set
>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>> mimic REPLICATED behavior).
>> This time it worked as expected: every invocation returned the same
>> result, including the reloaded entry.
>>
>> private CacheConfiguration networkCacheCfg() {
>>     CacheConfiguration networkCacheCfg =
>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>     networkCacheCfg.setWriteThrough(false);
>>     networkCacheCfg.setReadThrough(true);
>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>     //networkCacheCfg.setBackups(3);
>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>     Factory<NetworkDataCacheLoader> storeFactory =
>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>     affinityFunction.setExcludeNeighbors(false);
>>     networkCacheCfg.setAffinity(affinityFunction);
>>     networkCacheCfg.setStatisticsEnabled(true);
>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>     return networkCacheCfg;
>> }
>>
>> @Override
>> public V load(K k) throws CacheLoaderException {
>>     V value = null;
>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>     try (Connection connection = dataSource.getConnection();
>>          PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
>>         //statement.setObject(1, k.getId());
>>         setPreparedStatement(statement,k);
>>         try (ResultSet rs = statement.executeQuery()) {
>>             if (rs.next()) {
>>                 value = rowMapper.mapRow(rs, 0);
>>             }
>>         }
>>     } catch (SQLException e) {
>>         throw new CacheLoaderException(e.getMessage(), e);
>>     }
>>
>>     return value;
>> }
>>
>>
>> Thanks,
>>
>> Akash
>>
>>
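For reference, below is a minimal, self-contained sketch of the reproduction steps above, written against the public Ignite 2.x Java API. The cache name, key type, and SQL table are taken from the configuration in this message; the helper method and key are placeholders, not Akash's actual test code.

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    // Assumes a running 3-node cluster, a started client node, and a key
    // that is present in the database.
    static void reproduce(Ignite client, Object key) {
        IgniteCache<Object, Object> cache = client.cache("NETWORK_CACHE");

        cache.remove(key); // 1) removed from the cache, still in the DB
        cache.get(key);    // 2) read-through loads the value on the primary node only

        // 3) The same SQL query from the client is load-balanced across the
        // server nodes of the REPLICATED cache, so the count can flip
        // between runs.
        for (int i = 0; i < 5; i++) {
            List<List<?>> rows = cache.query(
                new SqlFieldsQuery("select count(*) from NetworkData")).getAll();
            System.out.println("run " + i + ": count = " + rows.get(0).get(0));
        }
    }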

Re: Read through not working as expected in case of Replicated cache

Posted by Ivan Pavlukhin <vo...@gmail.com>.
Hi Prasad,

Answering your questions:
1. The Ignite documentation portal has a "suggest edits" link; you can
suggest an improvement there. Another option is to create a ticket to
improve the documentation.
2. You can find both approaches in [1]. You will see a different
picture if cache.put is called.

[1] https://gist.github.com/pavlukhin/a94489c6296ace497be950598d7493c5

Best regards,
Ivan Pavlukhin
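For question 2, here is a minimal sketch of the kind of per-node check the gist in [1] performs; the helper name is hypothetical and the cache name is taken from this thread. After a read-through get(), only the key's primary node should print a non-null value:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CachePeekMode;

    // Broadcast a closure to every server node; localPeek never goes to the
    // store or to remote nodes, so it shows exactly what each node holds.
    static void peekOnAllServers(Ignite client, Object key) {
        client.compute(client.cluster().forServers()).broadcast(() -> {
            IgniteCache<Object, Object> c =
                Ignition.localIgnite().cache("NETWORK_CACHE");
            System.out.println(Ignition.localIgnite().cluster().localNode().id()
                + ": primary=" + c.localPeek(key, CachePeekMode.PRIMARY)
                + ", backup=" + c.localPeek(key, CachePeekMode.BACKUP));
        });
    }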

Mon, Mar 2, 2020 at 12:57, Prasad Bhalerao <pr...@gmail.com>:
>
> Hi Ivan,
>
> Thank you for the clarification.
>
> So the behavior is the same for REPLICATED as well as PARTITIONED caches.
>
> 1) Can we please have this behavior documented on the Ignite web page? This will help users avoid confusion and design their caches effectively.
>
> 2) You said "You can check it using IgniteCache.localPeek method (ask if more details how to do it are needed)". Can you please explain this in detail?
>
>
> Regards,
> Prasad
>
> On Mon, Mar 2, 2020 at 2:45 PM Ivan Pavlukhin <vo...@gmail.com> wrote:
>>
>> Hi Prasad,
>>
>> AFAIK, when a value is read through, it is not sent to backup nodes. You
>> can check it using the IgniteCache.localPeek method (ask if more details
>> on how to do it are needed).
>>
>> I usually think about a read-through cache in the following way: there
>> is an underlying storage with the "real" data, and the cache is used to
>> speed up access. Some kind of invalidation mechanism might be used, but
>> it is assumed to be fine to read values from the cache that are
>> temporarily inconsistent with the backing storage.
>>
>> Consequently, there seems to be no need to distribute values from the
>> underlying storage to all replicas: if a value is absent, a reader will
>> receive the actual value from the underlying storage.
>>
>> Best regards,
>> Ivan Pavlukhin
>>
>> Mon, Mar 2, 2020 at 10:41, Prasad Bhalerao <pr...@gmail.com>:
>> >
>> > Hi Ivan/Denis,
>> >
>> > Are you saying that when a value is loaded into the cache from an
>> > underlying storage using the read-through approach, the value is loaded
>> > only on the primary node and does not get replicated to its backup
>> > nodes?
>> >
>> > I am under the impression that when a value is loaded into a cache
>> > using the read-through approach, the key/value pair gets replicated to
>> > all backup nodes as well, irrespective of REPLICATED or PARTITIONED
>> > cache. Please correct me if I am wrong.
>> >
>> > I think the key/value must get replicated to all backup nodes when it
>> > is read through from the underlying storage; otherwise the user has to
>> > add the same key/value explicitly with a cache.put(key, value)
>> > operation so that it gets replicated to all of its backup nodes. This
>> > is what I am doing right now as a workaround for this issue.
>> >
>> > I will try to explain my use case again.
>> >
>> > I have a few replicated caches for which read-through is enabled but
>> > write-through is disabled. The underlying tables for these caches are
>> > updated by different systems. Whenever these tables are updated by a
>> > 3rd-party system, I want to reload the cache entries.
>> >
>> > I achieve this using the steps given below:
>> > 1) The 3rd-party system sends an update message (which contains the
>> > key) to our service by invoking our REST API.
>> > 2) Delete the entry from the cache using the cache().remove(key)
>> > method. (The entry is removed from the cache but remains in the DB, as
>> > write-through is false.)
>> > 3) Invoke the cache().get(key) method for the same key as in step 2 to
>> > reload the entry.
>> >
>> > Thanks,
>> > Prasad
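A minimal sketch of the reload workaround described above, assuming it is acceptable to put the freshly loaded value back explicitly (the helper name is hypothetical):

    import org.apache.ignite.IgniteCache;

    // Re-read the value through the store, then put it back explicitly so
    // that the new value is propagated to every replica/backup node.
    static <K, V> void reloadEntry(IgniteCache<K, V> cache, K key) {
        cache.remove(key);          // drop the stale copy cluster-wide
        V fresh = cache.get(key);   // read-through loads on the primary node only
        if (fresh != null)
            cache.put(key, fresh);  // put() distributes the value to all replicas
    }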
>> >
>> > On Sat, Feb 29, 2020 at 4:49 AM Denis Magda <dm...@apache.org> wrote:
>> >
>> > > Ivan, thanks for stepping in.
>> > >
>> > > Prasad, is Ivan's assumption correct that you query the data with SQL
>> > > under the observed circumstances? My guess is that you were referring
>> > > to the key-value APIs, since the issue is gone when write-through is
>> > > enabled.
>> > >
>> > > -
>> > > Denis
>> > >
>> > >
>> > > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vo...@gmail.com>
>> > > wrote:
>> > >
>> > > > As I understand it, the issue here is the combination of
>> > > > read-through and SQL. SQL queries do not read from the underlying
>> > > > storage when read-through is configured. The observed result happens
>> > > > because a query from a client node over a REPLICATED cache picks a
>> > > > random server node (a kind of load balancing) to retrieve data. The
>> > > > following happens in the described case:
>> > > > 1. The value is loaded into the cache from the underlying storage on
>> > > > the primary node when cache.get is called.
>> > > > 2. The query is executed multiple times; when the chosen node is the
>> > > > primary node, the value is observed. On other nodes the value is
>> > > > absent.
>> > > >
>> > > > Actually, the behavior for a PARTITIONED cache is similar, but the
>> > > > inconsistency is not observed because SQL queries read data from the
>> > > > primary node there. If the primary node leaves the cluster, an SQL
>> > > > query will not see the value anymore, so the same inconsistency will
>> > > > appear.
>> > > >
>> > > > Best regards,
>> > > > Ivan Pavlukhin
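One way to see this directly is to run the same query with setLocal(true) on every server node; a sketch, reusing the NetworkData table from this thread (the helper name is hypothetical). Under the described scenario, only the key's primary node should count the reloaded row:

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    // Broadcast a local-only SQL count to each server node; the counts can
    // differ because the read-through value exists only on the primary node.
    static void countOnEachServer(Ignite client) {
        client.compute(client.cluster().forServers()).broadcast(() -> {
            IgniteCache<Object, Object> c =
                Ignition.localIgnite().cache("NETWORK_CACHE");
            List<List<?>> rows = c.query(new SqlFieldsQuery(
                "select count(*) from NetworkData").setLocal(true)).getAll();
            System.out.println(Ignition.localIgnite().cluster().localNode().id()
                + " -> " + rows.get(0).get(0));
        });
    }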
>> > > >
>> > > > Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <
>> > > > prasadbhalerao1983@gmail.com>:
>> > > > >
>> > > > > Can someone please comment on this?
>> > > > >
>> > > > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
>> > > > >
>> > > > > > Ignite Dev team,
>> > > > > >
>> > > > > > This sounds like an issue in our replicated cache implementation
>> > > > > > rather than an expected behavior, especially if partitioned
>> > > > > > caches don't have such a specificity.
>> > > > > >
>> > > > > > Who can explain why write-through needs to be enabled for
>> > > > > > replicated caches to reload an entry from an underlying database
>> > > > > > properly/consistently?
>> > > > > >
>> > > > > > -
>> > > > > > Denis
>> > > > > >
>> > > > > >
>> > > > > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
>> > > > > > ilya.kasnacheev@gmail.com> wrote:
>> > > > > >
>> > > > > > > Hello!
>> > > > > > >
>> > > > > > > I think this is by design. You may suggest edits on readme.io.
>> > > > > > >
>> > > > > > > Regards,
>> > > > > > > --
>> > > > > > > Ilya Kasnacheev

Re: Read through not working as expected in case of Replicated cache

Posted by Prasad Bhalerao <pr...@gmail.com>.
Hi Ivan,

Thank you for the clarification.

So the behavior is the same for REPLICATED as well as PARTITIONED caches.

1) Can we please have this behavior documented on the Ignite web page? This
will help users avoid confusion and design their caches effectively.

2) You said "You can check it using IgniteCache.localPeek method (ask if
more details how to do it are needed)". Can you please explain this in
detail?


Regards,
Prasad

Re: Read through not working as expected in case of Replicated cache

Posted by Ivan Pavlukhin <vo...@gmail.com>.
Hi Prasad,

AFAIK, when a value is read through, it is not sent to backup nodes. You
can check it using the IgniteCache.localPeek method (ask if more details
on how to do it are needed).

I usually think about a read-through cache in the following way: there is
an underlying storage with the "real" data, and the cache is used to speed
up access. Some kind of invalidation mechanism might be used, but it is
assumed to be fine to read values from the cache that are temporarily
inconsistent with the backing storage.

Consequently, there seems to be no need to distribute values from the
underlying storage to all replicas: if a value is absent, a reader will
receive the actual value from the underlying storage.

Best regards,
Ivan Pavlukhin
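One common invalidation mechanism of the kind mentioned above is a time-based expiry policy, so that stale entries eventually fall out of the cache and the next read goes through to the storage again. A minimal sketch, assuming up to one minute of staleness is acceptable (the duration is an arbitrary example; the store factory and other settings from the thread are omitted):

    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Entries silently expire one minute after creation; the next get()
    // falls back to read-through and fetches the current row from the DB.
    CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("NETWORK_CACHE");
    cfg.setReadThrough(true);
    cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));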

пн, 2 мар. 2020 г. в 10:41, Prasad Bhalerao <pr...@gmail.com>:
>
> Hi Ivan/Denis,
>
> Are you saying that when a value is loaded to cache from an underlying
> storage using read-through approach, value is loaded only on primary node
> and does not get replicated on its back nodes?
>
> I am under the impression that when a value is loaded in a cache using
> read-through approach, this key/value pair gets replicated on all back-up
> nodes as well, irrespective of REPLICATED OR PARTITIONED cache.
> Please correct me if I am wrong.
>
> I think the key/value must get replicated on all backup nodes when it is
> read through underlying storage otherwise user will have to add the same
> key/value explicitly using cache.put(key,value) operation so that it will
> get replicated on all of its backup nodes.  This is what I am doing right
> now as a workaround to solve this issue.
>
> I will try to explain my use case again.
>
> I have few replicated caches for which read-through is enabled but
> write-through is disabled. The underlying tables for these caches are
> updated by different systems. Whenever these tables are updated by 3rd
> party system I want to reload the "cache entries".
>
> I achieve this using below given steps:
> 1) 3rd party systems sends an update message (which contains the key) to
> our service by invoking our REST api.
> 2) Delete an entry from cache using cache().remove(key) method. (Entry is
> just removed from cache but present in DB as write-through is false)
> 3) Invoke cache().get(key) method for the same key in step 2 to reload an
> entry.
>
> Thanks,
> Prasad
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> Prasad
>
> On Sat, Feb 29, 2020 at 4:49 AM Denis Magda <dm...@apache.org> wrote:
>
> > Ivan, thanks for stepping in.
> >
> > Prasad, is Ivan's assumption correct that you query the data with SQL under
> > the observed circumstances? My guess is that you were referring to the
> > key-value APIs as long as the issue is gone when the write-through is
> > enabled.
> >
> > -
> > Denis
> >
> >
> > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vo...@gmail.com>
> > wrote:
> >
> > > As I understand the thing here is in combination of read-through and
> > > SQL. SQL queries do not read from underlying storage when read-through
> > > is configured. And an observed result happens because query from a
> > > client node over REPLICATED cache picks random server node (kind of
> > > load-balancing) to retrieve data. Following happens in the described
> > > case:
> > > 1. Value is loaded to a cache from an underlying storage on a primary
> > > node when cache.get is called.
> > > 2. Query is executed multiple times and when the chose node is the
> > > primary node then the value is observed. On other nodes the value is
> > > absent.
> > >
> > > Actually, behavior for PARTITIONED cache is similar, but an
> > > inconsistency is not observed because SQL queries read data from the
> > > primary node there. If the primary node leaves a cluster then an SQL
> > > query will not see the value anymore. So, the same inconsistency will
> > > appear.
> > >
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> > > пт, 28 февр. 2020 г. в 13:23, Prasad Bhalerao <
> > > prasadbhalerao1983@gmail.com>:
> > > >
> > > > Can someone please comment on this?
> > > >
> > > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
> > > >
> > > > > Ignite Dev team,
> > > > >
> > > > > This sounds like an issue in our replicated cache implementation
> > rather
> > > > > than an expected behavior. Especially, if partitioned caches don't
> > have
> > > > > such a specificity.
> > > > >
> > > > > Who can explain why write-through needs to be enabled for replicated
> > > caches
> > > > > to reload an entry from an underlying database properly/consistently?
> > > > >
> > > > > -
> > > > > Denis
> > > > >
> > > > >
> > > > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> > > ilya.kasnacheev@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Hello!
> > > > > >
> > > > > > I think this is by design. You may suggest edits on readme.io.
> > > > > >
> > > > > > Regards,
> > > > > > --
> > > > > > Ilya Kasnacheev
> > > > > >
> > > > > >
> > > > > > пн, 24 февр. 2020 г. в 17:28, Prasad Bhalerao <
> > > > > > prasadbhalerao1983@gmail.com>:
> > > > > >
> > > > > >> Hi,
> > > > > >>
> > > > > >> Is this a bug or the cache is designed to work this way?
> > > > > >>
> > > > > >> If it is as-designed, can this behavior be updated in ignite
> > > > > >> documentation?
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Prasad
> > > > > >>
> > > > > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > > > > >> ilya.kasnacheev@gmail.com> wrote:
> > > > > >>
> > > > > >>> Hello!
> > > > > >>>
> > > > > >>> I have discussed this with fellow Ignite developers, and they say
> > > read
> > > > > >>> through for replicated cache would work where there is either:
> > > > > >>>
> > > > > >>> - writeThrough enabled and all changes do through it.
> > > > > >>> - database contents do not change for already read keys.
> > > > > >>>
> > > > > >>> I can see that neither is met in your case, so you can expect the
> > > > > >>> behavior that you are seeing.
> > > > > >>>
> > > > > >>> Regards,
> > > > > >>> --
> > > > > >>> Ilya Kasnacheev
> > > > > >>>
> > > > > >>>
> > > > > >>> вт, 29 окт. 2019 г. в 18:18, Akash Shinde <akashshinde@gmail.com
> > >:
> > > > > >>>
> > > > > >>>> I am using Ignite 2.6 version.
> > > > > >>>>
> > > > > >>>> I am starting 3 server nodes with a replicated cache and 1
> > client
> > > > > node.
> > > > > >>>> Cache configuration is as follows.
> > > > > >>>> Read-through true on but write-through is false. Load data by
> > key
> > > is
> > > > > >>>> implemented as given below in cache-loader.
> > > > > >>>>
> > > > > >>>> Steps to reproduce issue:
> > > > > >>>> 1) Delete an entry from cache using IgniteCache.remove() method.
> > > > > (Entry
> > > > > >>>> is just removed from cache but present in DB as write-through is
> > > > > false)
> > > > > >>>> 2) Invoke IgniteCache.get() method for the same key in step 1.
> > > > > >>>> 3) Now query the cache from client node. Every invocation
> > returns
> > > > > >>>> different results.
> > > > > >>>> Sometimes it returns reloaded entry, sometime returns the
> > results
> > > > > >>>> without reloaded entry.
> > > > > >>>>
> > > > > >>>> Looks like read-through is not replicating the reloaded entry on
> > > all
> > > > > >>>> nodes in case of REPLICATED cache.
> > > > > >>>>
> > > > > >>>> So to investigate further I changed the cache mode to
> > PARTITIONED
> > > and
> > > > > >>>> set the backup count to 3 i.e. total number of nodes present in
> > > > > cluster (to
> > > > > >>>> mimic REPLICATED behavior).
> > > > > >>>> This time it worked as expected.
> > > > > >>>> Every invocation returned the same result with reloaded entry.
> > > > > >>>>
> > > > > >>>> *  private CacheConfiguration networkCacheCfg() {*
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> *    CacheConfiguration networkCacheCfg = new
> > > > > >>>> CacheConfiguration<>(CacheName.NETWORK_CACHE.name
> > > > > >>>> <http://CacheName.NETWORK_CACHE.name>());
> > > > > >>>>
> > > networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > > > > >>>> networkCacheCfg.setWriteThrough(false);
> > > > > >>>> networkCacheCfg.setReadThrough(true);
> > > > > >>>> networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > > > > >>>>
> > > > >
> > >
> > networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> > > > > >>>>   //networkCacheCfg.setBackups(3);
> > > > > >>>> networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > > > > >>>> Factory<NetworkDataCacheLoader> storeFactory =
> > > > > >>>> FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > > > > >>>> networkCacheCfg.setCacheStoreFactory(storeFactory);
> > > > > >>>> networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > > > > >>>> NetworkData.class);
> > > networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > > > > >>>> RendezvousAffinityFunction affinityFunction = new
> > > > > >>>> RendezvousAffinityFunction();
> > > > > >>>> affinityFunction.setExcludeNeighbors(false);
> > > > > >>>> networkCacheCfg.setAffinity(affinityFunction);
> > > > > >>>> networkCacheCfg.setStatisticsEnabled(true);   //
> > > > > >>>> networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > > > > return
> > > > > >>>> networkCacheCfg;  }*
> > > > > >>>>
> > > > > >>>> @Override
> > > > > >>>> public V load(K k) throws CacheLoaderException {
> > > > > >>>>     V value = null;
> > > > > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > > > > >>>>     try (Connection connection = dataSource.getConnection();
> > > > > >>>>          PreparedStatement statement =
> > > > > connection.prepareStatement(loadByKeySql)) {
> > > > > >>>>         //statement.setObject(1, k.getId());
> > > > > >>>>         setPreparedStatement(statement,k);
> > > > > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > > > > >>>>             if (rs.next()) {
> > > > > >>>>                 value = rowMapper.mapRow(rs, 0);
> > > > > >>>>             }
> > > > > >>>>         }
> > > > > >>>>     } catch (SQLException e) {
> > > > > >>>>
> > > > > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > > > > >>>>     }
> > > > > >>>>
> > > > > >>>>     return value;
> > > > > >>>> }
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> Thanks,
> > > > > >>>>
> > > > > >>>> Akash
> > > > > >>>>
> > > > > >>>>
> > > > >
> > >
> >

Re: Read through not working as expected in case of Replicated cache

Posted by Ivan Pavlukhin <vo...@gmail.com>.
Hi Prasad,

AFAIK, when value is read through it is not sent to backup nodes. You
can check it using IgniteCache.localPeek method (ask if more details
how to do it are needed).

I usually think about read-through cache for a following case. There
is an underlying storage with "real" data, cache is used to speedup an
access. Some kind of invalidation mechanism might be used but it is
assumed fine to read values from cache which are not consistent with
the backing storage at some point.

Consequently it seems there is no need to distribute values from an
underlying storage over all replicas because if a value is absent a
reader will receive an actual value from the underlying storage.

Best regards,
Ivan Pavlukhin

пн, 2 мар. 2020 г. в 10:41, Prasad Bhalerao <pr...@gmail.com>:
>
> Hi Ivan/Denis,
>
> Are you saying that when a value is loaded to cache from an underlying
> storage using read-through approach, value is loaded only on primary node
> and does not get replicated on its back nodes?
>
> I am under the impression that when a value is loaded in a cache using
> read-through approach, this key/value pair gets replicated on all back-up
> nodes as well, irrespective of REPLICATED OR PARTITIONED cache.
> Please correct me if I am wrong.
>
> I think the key/value must get replicated on all backup nodes when it is
> read through underlying storage otherwise user will have to add the same
> key/value explicitly using cache.put(key,value) operation so that it will
> get replicated on all of its backup nodes.  This is what I am doing right
> now as a workaround to solve this issue.
>
> I will try to explain my use case again.
>
> I have few replicated caches for which read-through is enabled but
> write-through is disabled. The underlying tables for these caches are
> updated by different systems. Whenever these tables are updated by 3rd
> party system I want to reload the "cache entries".
>
> I achieve this using below given steps:
> 1) 3rd party systems sends an update message (which contains the key) to
> our service by invoking our REST api.
> 2) Delete an entry from cache using cache().remove(key) method. (Entry is
> just removed from cache but present in DB as write-through is false)
> 3) Invoke cache().get(key) method for the same key in step 2 to reload an
> entry.
>
> Thanks,
> Prasad
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> Prasad
>
> On Sat, Feb 29, 2020 at 4:49 AM Denis Magda <dm...@apache.org> wrote:
>
> > Ivan, thanks for stepping in.
> >
> > Prasad, is Ivan's assumption correct that you query the data with SQL under
> > the observed circumstances? My guess is that you were referring to the
> > key-value APIs as long as the issue is gone when the write-through is
> > enabled.
> >
> > -
> > Denis
> >
> >
> > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vo...@gmail.com>
> > wrote:
> >
> > > As I understand the thing here is in combination of read-through and
> > > SQL. SQL queries do not read from underlying storage when read-through
> > > is configured. And an observed result happens because query from a
> > > client node over REPLICATED cache picks random server node (kind of
> > > load-balancing) to retrieve data. Following happens in the described
> > > case:
> > > 1. Value is loaded to a cache from an underlying storage on a primary
> > > node when cache.get is called.
> > > 2. Query is executed multiple times and when the chose node is the
> > > primary node then the value is observed. On other nodes the value is
> > > absent.
> > >
> > > Actually, behavior for PARTITIONED cache is similar, but an
> > > inconsistency is not observed because SQL queries read data from the
> > > primary node there. If the primary node leaves a cluster then an SQL
> > > query will not see the value anymore. So, the same inconsistency will
> > > appear.
> > >
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> > > пт, 28 февр. 2020 г. в 13:23, Prasad Bhalerao <
> > > prasadbhalerao1983@gmail.com>:
> > > >
> > > > Can someone please comment on this?
> > > >
> > > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
> > > >
> > > > > Ignite Dev team,
> > > > >
> > > > > This sounds like an issue in our replicated cache implementation
> > rather
> > > > > than an expected behavior. Especially, if partitioned caches don't
> > have
> > > > > such a specificity.
> > > > >
> > > > > Who can explain why write-through needs to be enabled for replicated
> > > caches
> > > > > to reload an entry from an underlying database properly/consistently?
> > > > >
> > > > > -
> > > > > Denis
> > > > >
> > > > >
> > > > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> > > ilya.kasnacheev@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Hello!
> > > > > >
> > > > > > I think this is by design. You may suggest edits on readme.io.
> > > > > >
> > > > > > Regards,
> > > > > > --
> > > > > > Ilya Kasnacheev
> > > > > >
> > > > > >
> > > > > > пн, 24 февр. 2020 г. в 17:28, Prasad Bhalerao <
> > > > > > prasadbhalerao1983@gmail.com>:
> > > > > >
> > > > > >> Hi,
> > > > > >>
> > > > > >> Is this a bug or the cache is designed to work this way?
> > > > > >>
> > > > > >> If it is as-designed, can this behavior be updated in ignite
> > > > > >> documentation?
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Prasad
> > > > > >>
> > > > > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > > > > >> ilya.kasnacheev@gmail.com> wrote:
> > > > > >>
> > > > > >>> Hello!
> > > > > >>>
> > > > > >>> I have discussed this with fellow Ignite developers, and they say
> > > read
> > > > > >>> through for replicated cache would work where there is either:
> > > > > >>>
> > > > > >>> - writeThrough enabled and all changes go through it.
> > > > > >>> - database contents do not change for already read keys.
> > > > > >>>
> > > > > >>> I can see that neither is met in your case, so you can expect the
> > > > > >>> behavior that you are seeing.
> > > > > >>>
> > > > > >>> Regards,
> > > > > >>> --
> > > > > >>> Ilya Kasnacheev
> > > > > >>>
> > > > > >>>
> > > > > >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <akashshinde@gmail.com
> > >:
> > > > > >>>
> > > > > >>>> I am using Ignite version 2.6.
> > > > > >>>>
> > > > > >>>> I am starting 3 server nodes with a replicated cache and 1 client node.
> > > > > >>>> The cache configuration is as follows: read-through is true but
> > > > > >>>> write-through is false. Loading data by key is implemented in the
> > > > > >>>> cache-loader as given below.
> > > > > >>>>
> > > > > >>>> Steps to reproduce the issue:
> > > > > >>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
> > > > > >>>> (The entry is only removed from the cache; it is still present in the DB
> > > > > >>>> because write-through is false.)
> > > > > >>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
> > > > > >>>> 3) Now query the cache from the client node. Every invocation returns
> > > > > >>>> different results: sometimes it returns the reloaded entry, sometimes it
> > > > > >>>> returns results without the reloaded entry.
> > > > > >>>>
> > > > > >>>> It looks like read-through is not replicating the reloaded entry to all
> > > > > >>>> nodes in the case of a REPLICATED cache.
> > > > > >>>>
> > > > > >>>> To investigate further, I changed the cache mode to PARTITIONED and set
> > > > > >>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
> > > > > >>>> mimic REPLICATED behavior). This time it worked as expected: every
> > > > > >>>> invocation returned the same result, with the reloaded entry.
> > > > > >>>>
> > > > > >>>> private CacheConfiguration networkCacheCfg() {
> > > > > >>>>     CacheConfiguration networkCacheCfg =
> > > > > >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> > > > > >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > > > > >>>>     networkCacheCfg.setWriteThrough(false);
> > > > > >>>>     networkCacheCfg.setReadThrough(true);
> > > > > >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > > > > >>>>     networkCacheCfg.setWriteSynchronizationMode(
> > > > > >>>>         CacheWriteSynchronizationMode.FULL_SYNC);
> > > > > >>>>     //networkCacheCfg.setBackups(3);
> > > > > >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > > > > >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> > > > > >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > > > > >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> > > > > >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > > > > >>>>         NetworkData.class);
> > > > > >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > > > > >>>>     RendezvousAffinityFunction affinityFunction =
> > > > > >>>>         new RendezvousAffinityFunction();
> > > > > >>>>     affinityFunction.setExcludeNeighbors(false);
> > > > > >>>>     networkCacheCfg.setAffinity(affinityFunction);
> > > > > >>>>     networkCacheCfg.setStatisticsEnabled(true);
> > > > > >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > > > > >>>>     return networkCacheCfg;
> > > > > >>>> }
> > > > > >>>>
> > > > > >>>> @Override
> > > > > >>>> public V load(K k) throws CacheLoaderException {
> > > > > >>>>     V value = null;
> > > > > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > > > > >>>>     try (Connection connection = dataSource.getConnection();
> > > > > >>>>          PreparedStatement statement =
> > > > > >>>>              connection.prepareStatement(loadByKeySql)) {
> > > > > >>>>         //statement.setObject(1, k.getId());
> > > > > >>>>         setPreparedStatement(statement, k);
> > > > > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > > > > >>>>             if (rs.next()) {
> > > > > >>>>                 value = rowMapper.mapRow(rs, 0);
> > > > > >>>>             }
> > > > > >>>>         }
> > > > > >>>>     } catch (SQLException e) {
> > > > > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > > > > >>>>     }
> > > > > >>>>
> > > > > >>>>     return value;
> > > > > >>>> }
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> Thanks,
> > > > > >>>>
> > > > > >>>> Akash
> > > > > >>>>
> > > > > >>>>
> > > > >
> > >
> >

Re: Read through not working as expected in case of Replicated cache

Posted by Prasad Bhalerao <pr...@gmail.com>.
Hi Ivan/Denis,

Are you saying that when a value is loaded into the cache from the
underlying storage via read-through, it is loaded only on the primary node
and does not get replicated to its backup nodes?

I was under the impression that when a value is loaded through
read-through, the key/value pair gets replicated to all backup nodes as
well, irrespective of whether the cache is REPLICATED or PARTITIONED.
Please correct me if I am wrong.

I think the key/value must be replicated to all backup nodes when it is
read through the underlying storage; otherwise the user has to re-add the
same key/value explicitly with a cache.put(key, value) operation so that
it gets replicated to all of its backup nodes. This is what I am doing
right now as a workaround for this issue.

I will try to explain my use case again.

I have a few replicated caches for which read-through is enabled but
write-through is disabled. The underlying tables for these caches are
updated by different systems. Whenever a 3rd party system updates these
tables, I want to reload the affected cache entries.

I achieve this with the steps below (a sketch of the put-based workaround
mentioned above follows the steps):
1) The 3rd party system sends an update message (which contains the key)
to our service by invoking our REST api.
2) Delete the entry from the cache using the cache().remove(key) method.
(The entry is only removed from the cache; it is still present in the DB
because write-through is false.)
3) Invoke the cache().get(key) method for the same key as in step 2 to
reload the entry.

Thanks,
Prasad

On Sat, Feb 29, 2020 at 4:49 AM Denis Magda <dm...@apache.org> wrote:

> Ivan, thanks for stepping in.
>
> Prasad, is Ivan's assumption correct that you query the data with SQL under
> the observed circumstances? My guess is that you were referring to the
> key-value APIs, since the issue goes away when write-through is enabled.
>
> -
> Denis
>
>
> On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vo...@gmail.com>
> wrote:
>
> > As I understand it, the issue here is the combination of read-through
> > and SQL. SQL queries do not read from the underlying storage when
> > read-through is configured. The observed result happens because a query
> > from a client node over a REPLICATED cache picks a random server node
> > (a kind of load balancing) to retrieve data. The following happens in
> > the described case:
> > 1. The value is loaded into the cache from the underlying storage on
> > the primary node when cache.get is called.
> > 2. The query is executed multiple times, and when the chosen node is
> > the primary node the value is observed. On the other nodes the value
> > is absent.
> >
> > Actually, the behavior for a PARTITIONED cache is similar, but the
> > inconsistency is not observed because SQL queries read data from the
> > primary node there. If the primary node leaves the cluster, an SQL
> > query will not see the value anymore, so the same inconsistency will
> > appear.
> >
> > Best regards,
> > Ivan Pavlukhin
> >
> > Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <
> > prasadbhalerao1983@gmail.com>:
> > >
> > > Can someone please comment on this?
> > >
> > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
> > >
> > > > Ignite Dev team,
> > > >
> > > > This sounds like an issue in our replicated cache implementation
> > > > rather than expected behavior, especially since partitioned caches
> > > > don't have this quirk.
> > > >
> > > > Who can explain why write-through needs to be enabled for replicated
> > > > caches to reload an entry from an underlying database properly and
> > > > consistently?
> > > >
> > > > -
> > > > Denis
> > > >
> > > >
> > > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> > ilya.kasnacheev@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > Hello!
> > > > >
> > > > > I think this is by design. You may suggest edits on readme.io.
> > > > >
> > > > > Regards,
> > > > > --
> > > > > Ilya Kasnacheev
> > > > >
> > > > >
> > > > > Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> > > > > prasadbhalerao1983@gmail.com>:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> Is this a bug, or is the cache designed to work this way?
> > > > >>
> > > > >> If it is as designed, can this behavior be documented in the Ignite
> > > > >> documentation?
> > > > >>
> > > > >> Thanks,
> > > > >> Prasad
> > > > >>
> > > > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > > > >> ilya.kasnacheev@gmail.com> wrote:
> > > > >>
> > > > >>> Hello!
> > > > >>>
> > > > >>> I have discussed this with fellow Ignite developers, and they say
> > read
> > > > >>> through for replicated cache would work where there is either:
> > > > >>>
> > > > >>> - writeThrough enabled and all changes go through it.
> > > > >>> - database contents do not change for already read keys.
> > > > >>>
> > > > >>> I can see that neither is met in your case, so you can expect the
> > > > >>> behavior that you are seeing.
> > > > >>>
> > > > >>> Regards,
> > > > >>> --
> > > > >>> Ilya Kasnacheev
> > > > >>>
> > > > >>>
> > > > >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <akashshinde@gmail.com
> >:
> > > > >>>
> > > > >>>> I am using Ignite version 2.6.
> > > > >>>>
> > > > >>>> I am starting 3 server nodes with a replicated cache and 1 client node.
> > > > >>>> The cache configuration is as follows: read-through is true but
> > > > >>>> write-through is false. Loading data by key is implemented in the
> > > > >>>> cache-loader as given below.
> > > > >>>>
> > > > >>>> Steps to reproduce the issue:
> > > > >>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
> > > > >>>> (The entry is only removed from the cache; it is still present in the DB
> > > > >>>> because write-through is false.)
> > > > >>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
> > > > >>>> 3) Now query the cache from the client node. Every invocation returns
> > > > >>>> different results: sometimes it returns the reloaded entry, sometimes it
> > > > >>>> returns results without the reloaded entry.
> > > > >>>>
> > > > >>>> It looks like read-through is not replicating the reloaded entry to all
> > > > >>>> nodes in the case of a REPLICATED cache.
> > > > >>>>
> > > > >>>> To investigate further, I changed the cache mode to PARTITIONED and set
> > > > >>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
> > > > >>>> mimic REPLICATED behavior). This time it worked as expected: every
> > > > >>>> invocation returned the same result, with the reloaded entry.
> > > > >>>>
> > > > >>>> private CacheConfiguration networkCacheCfg() {
> > > > >>>>     CacheConfiguration networkCacheCfg =
> > > > >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> > > > >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > > > >>>>     networkCacheCfg.setWriteThrough(false);
> > > > >>>>     networkCacheCfg.setReadThrough(true);
> > > > >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > > > >>>>     networkCacheCfg.setWriteSynchronizationMode(
> > > > >>>>         CacheWriteSynchronizationMode.FULL_SYNC);
> > > > >>>>     //networkCacheCfg.setBackups(3);
> > > > >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > > > >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> > > > >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > > > >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> > > > >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > > > >>>>         NetworkData.class);
> > > > >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > > > >>>>     RendezvousAffinityFunction affinityFunction =
> > > > >>>>         new RendezvousAffinityFunction();
> > > > >>>>     affinityFunction.setExcludeNeighbors(false);
> > > > >>>>     networkCacheCfg.setAffinity(affinityFunction);
> > > > >>>>     networkCacheCfg.setStatisticsEnabled(true);
> > > > >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > > > >>>>     return networkCacheCfg;
> > > > >>>> }
> > > > >>>>
> > > > >>>> @Override
> > > > >>>> public V load(K k) throws CacheLoaderException {
> > > > >>>>     V value = null;
> > > > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > > > >>>>     try (Connection connection = dataSource.getConnection();
> > > > >>>>          PreparedStatement statement =
> > > > >>>>              connection.prepareStatement(loadByKeySql)) {
> > > > >>>>         //statement.setObject(1, k.getId());
> > > > >>>>         setPreparedStatement(statement, k);
> > > > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > > > >>>>             if (rs.next()) {
> > > > >>>>                 value = rowMapper.mapRow(rs, 0);
> > > > >>>>             }
> > > > >>>>         }
> > > > >>>>     } catch (SQLException e) {
> > > > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     return value;
> > > > >>>> }
> > > > >>>>
> > > > >>>>
> > > > >>>> Thanks,
> > > > >>>>
> > > > >>>> Akash
> > > > >>>>
> > > > >>>>
> > > >
> >
>

Re: Read through not working as expected in case of Replicated cache

Posted by Denis Magda <dm...@apache.org>.
Ivan, thanks for stepping in.

Prasad, is Ivan's assumption correct that you query the data with SQL under
the observed circumstances? My guess is that you were referring to the
key-value APIs, since the issue goes away when write-through is enabled.

-
Denis


On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vo...@gmail.com> wrote:

> As I understand it, the issue here is the combination of read-through
> and SQL. SQL queries do not read from the underlying storage when
> read-through is configured. The observed result happens because a query
> from a client node over a REPLICATED cache picks a random server node
> (a kind of load balancing) to retrieve data. The following happens in
> the described case:
> 1. The value is loaded into the cache from the underlying storage on
> the primary node when cache.get is called.
> 2. The query is executed multiple times, and when the chosen node is
> the primary node the value is observed. On the other nodes the value
> is absent.
>
> Actually, the behavior for a PARTITIONED cache is similar, but the
> inconsistency is not observed because SQL queries read data from the
> primary node there. If the primary node leaves the cluster, an SQL
> query will not see the value anymore, so the same inconsistency will
> appear.
>
> Best regards,
> Ivan Pavlukhin
>
> Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <
> prasadbhalerao1983@gmail.com>:
> >
> > Can someone please comment on this?
> >
> > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
> >
> > > Ignite Dev team,
> > >
> > > This sounds like an issue in our replicated cache implementation
> > > rather than expected behavior, especially since partitioned caches
> > > don't have this quirk.
> > >
> > > Who can explain why write-through needs to be enabled for replicated
> > > caches to reload an entry from an underlying database properly and
> > > consistently?
> > >
> > > -
> > > Denis
> > >
> > >
> > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> ilya.kasnacheev@gmail.com
> > > >
> > > wrote:
> > >
> > > > Hello!
> > > >
> > > > I think this is by design. You may suggest edits on readme.io.
> > > >
> > > > Regards,
> > > > --
> > > > Ilya Kasnacheev
> > > >
> > > >
> > > > Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> > > > prasadbhalerao1983@gmail.com>:
> > > >
> > > >> Hi,
> > > >>
> > > >> Is this a bug, or is the cache designed to work this way?
> > > >>
> > > >> If it is as designed, can this behavior be documented in the Ignite
> > > >> documentation?
> > > >>
> > > >> Thanks,
> > > >> Prasad
> > > >>
> > > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > > >> ilya.kasnacheev@gmail.com> wrote:
> > > >>
> > > >>> Hello!
> > > >>>
> > > >>> I have discussed this with fellow Ignite developers, and they say
> read
> > > >>> through for replicated cache would work where there is either:
> > > >>>
> > > >>> - writeThrough enabled and all changes go through it.
> > > >>> - database contents do not change for already read keys.
> > > >>>
> > > >>> I can see that neither is met in your case, so you can expect the
> > > >>> behavior that you are seeing.
> > > >>>
> > > >>> Regards,
> > > >>> --
> > > >>> Ilya Kasnacheev
> > > >>>
> > > >>>
> > > >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com
> > > >>>
> > > >>>> I am using Ignite version 2.6.
> > > >>>>
> > > >>>> I am starting 3 server nodes with a replicated cache and 1 client node.
> > > >>>> The cache configuration is as follows: read-through is true but
> > > >>>> write-through is false. Loading data by key is implemented in the
> > > >>>> cache-loader as given below.
> > > >>>>
> > > >>>> Steps to reproduce the issue:
> > > >>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
> > > >>>> (The entry is only removed from the cache; it is still present in the DB
> > > >>>> because write-through is false.)
> > > >>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
> > > >>>> 3) Now query the cache from the client node. Every invocation returns
> > > >>>> different results: sometimes it returns the reloaded entry, sometimes it
> > > >>>> returns results without the reloaded entry.
> > > >>>>
> > > >>>> It looks like read-through is not replicating the reloaded entry to all
> > > >>>> nodes in the case of a REPLICATED cache.
> > > >>>>
> > > >>>> To investigate further, I changed the cache mode to PARTITIONED and set
> > > >>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
> > > >>>> mimic REPLICATED behavior). This time it worked as expected: every
> > > >>>> invocation returned the same result, with the reloaded entry.
> > > >>>>
> > > >>>> private CacheConfiguration networkCacheCfg() {
> > > >>>>     CacheConfiguration networkCacheCfg =
> > > >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> > > >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > > >>>>     networkCacheCfg.setWriteThrough(false);
> > > >>>>     networkCacheCfg.setReadThrough(true);
> > > >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > > >>>>     networkCacheCfg.setWriteSynchronizationMode(
> > > >>>>         CacheWriteSynchronizationMode.FULL_SYNC);
> > > >>>>     //networkCacheCfg.setBackups(3);
> > > >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > > >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> > > >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > > >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> > > >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > > >>>>         NetworkData.class);
> > > >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > > >>>>     RendezvousAffinityFunction affinityFunction =
> > > >>>>         new RendezvousAffinityFunction();
> > > >>>>     affinityFunction.setExcludeNeighbors(false);
> > > >>>>     networkCacheCfg.setAffinity(affinityFunction);
> > > >>>>     networkCacheCfg.setStatisticsEnabled(true);
> > > >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > > >>>>     return networkCacheCfg;
> > > >>>> }
> > > >>>>
> > > >>>> @Override
> > > >>>> public V load(K k) throws CacheLoaderException {
> > > >>>>     V value = null;
> > > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > > >>>>     try (Connection connection = dataSource.getConnection();
> > > >>>>          PreparedStatement statement =
> > > >>>>              connection.prepareStatement(loadByKeySql)) {
> > > >>>>         //statement.setObject(1, k.getId());
> > > >>>>         setPreparedStatement(statement, k);
> > > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > > >>>>             if (rs.next()) {
> > > >>>>                 value = rowMapper.mapRow(rs, 0);
> > > >>>>             }
> > > >>>>         }
> > > >>>>     } catch (SQLException e) {
> > > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > > >>>>     }
> > > >>>>
> > > >>>>     return value;
> > > >>>> }
> > > >>>>
> > > >>>>
> > > >>>> Thanks,
> > > >>>>
> > > >>>> Akash
> > > >>>>
> > > >>>>
> > >
>

Re: Read through not working as expected in case of Replicated cache

Posted by Ivan Pavlukhin <vo...@gmail.com>.
As I understand it, the issue here is the combination of read-through and
SQL. SQL queries do not read from the underlying storage when read-through
is configured. The observed result happens because a query from a client
node over a REPLICATED cache picks a random server node (a kind of load
balancing) to retrieve data. The following happens in the described case:
1. The value is loaded into the cache from the underlying storage on the
primary node when cache.get is called.
2. The query is executed multiple times, and when the chosen node is the
primary node the value is observed. On the other nodes the value is
absent.

Actually, the behavior for a PARTITIONED cache is similar, but the
inconsistency is not observed because SQL queries read data from the
primary node there. If the primary node leaves the cluster, an SQL query
will not see the value anymore, so the same inconsistency will appear.
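
For illustration, a minimal sketch of how this can be observed from a
client node (NETWORK_CACHE and the NetworkData table come from the
configuration earlier in the thread; the Long key type is an assumption
for the example):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

void check(Ignite client, Long key) {
    IgniteCache<Long, NetworkData> cache = client.cache("NETWORK_CACHE");

    cache.remove(key); // entry gone from the cache, still in the DB
    cache.get(key);    // read-through loads it on the primary node only

    // Each execution may be served by a different server node of the
    // REPLICATED cache, so the count can flip between 0 and 1.
    SqlFieldsQuery qry = new SqlFieldsQuery(
        "select count(*) from NetworkData where _key = ?").setArgs(key);

    for (int i = 0; i < 5; i++) {
        List<List<?>> rows = cache.query(qry).getAll();
        System.out.println("attempt " + i + ": count = " + rows.get(0).get(0));
    }
}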

Best regards,
Ivan Pavlukhin

Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <pr...@gmail.com>:
>
> Can someone please comment on this?
>
> On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:
>
> > Ignite Dev team,
> >
> > This sounds like an issue in our replicated cache implementation
> > rather than expected behavior, especially since partitioned caches
> > don't have this quirk.
> >
> > Who can explain why write-through needs to be enabled for replicated
> > caches to reload an entry from an underlying database properly and
> > consistently?
> >
> > -
> > Denis
> >
> >
> > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <ilya.kasnacheev@gmail.com
> > >
> > wrote:
> >
> > > Hello!
> > >
> > > I think this is by design. You may suggest edits on readme.io.
> > >
> > > Regards,
> > > --
> > > Ilya Kasnacheev
> > >
> > >
> > > Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> > > prasadbhalerao1983@gmail.com>:
> > >
> > >> Hi,
> > >>
> > >> Is this a bug, or is the cache designed to work this way?
> > >>
> > >> If it is as designed, can this behavior be documented in the Ignite
> > >> documentation?
> > >>
> > >> Thanks,
> > >> Prasad
> > >>
> > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > >> ilya.kasnacheev@gmail.com> wrote:
> > >>
> > >>> Hello!
> > >>>
> > >>> I have discussed this with fellow Ignite developers, and they say read
> > >>> through for replicated cache would work where there is either:
> > >>>
> > >>> - writeThrough enabled and all changes go through it.
> > >>> - database contents do not change for already read keys.
> > >>>
> > >>> I can see that neither is met in your case, so you can expect the
> > >>> behavior that you are seeing.
> > >>>
> > >>> Regards,
> > >>> --
> > >>> Ilya Kasnacheev
> > >>>
> > >>>
> > >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com
> > >>>
> > >>>> I am using Ignite version 2.6.
> > >>>>
> > >>>> I am starting 3 server nodes with a replicated cache and 1 client node.
> > >>>> The cache configuration is as follows: read-through is true but
> > >>>> write-through is false. Loading data by key is implemented in the
> > >>>> cache-loader as given below.
> > >>>>
> > >>>> Steps to reproduce the issue:
> > >>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
> > >>>> (The entry is only removed from the cache; it is still present in the DB
> > >>>> because write-through is false.)
> > >>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
> > >>>> 3) Now query the cache from the client node. Every invocation returns
> > >>>> different results: sometimes it returns the reloaded entry, sometimes it
> > >>>> returns results without the reloaded entry.
> > >>>>
> > >>>> It looks like read-through is not replicating the reloaded entry to all
> > >>>> nodes in the case of a REPLICATED cache.
> > >>>>
> > >>>> To investigate further, I changed the cache mode to PARTITIONED and set
> > >>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
> > >>>> mimic REPLICATED behavior). This time it worked as expected: every
> > >>>> invocation returned the same result, with the reloaded entry.
> > >>>>
> > >>>> private CacheConfiguration networkCacheCfg() {
> > >>>>     CacheConfiguration networkCacheCfg =
> > >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> > >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > >>>>     networkCacheCfg.setWriteThrough(false);
> > >>>>     networkCacheCfg.setReadThrough(true);
> > >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > >>>>     networkCacheCfg.setWriteSynchronizationMode(
> > >>>>         CacheWriteSynchronizationMode.FULL_SYNC);
> > >>>>     //networkCacheCfg.setBackups(3);
> > >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> > >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> > >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > >>>>         NetworkData.class);
> > >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > >>>>     RendezvousAffinityFunction affinityFunction =
> > >>>>         new RendezvousAffinityFunction();
> > >>>>     affinityFunction.setExcludeNeighbors(false);
> > >>>>     networkCacheCfg.setAffinity(affinityFunction);
> > >>>>     networkCacheCfg.setStatisticsEnabled(true);
> > >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > >>>>     return networkCacheCfg;
> > >>>> }
> > >>>>
> > >>>> @Override
> > >>>> public V load(K k) throws CacheLoaderException {
> > >>>>     V value = null;
> > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > >>>>     try (Connection connection = dataSource.getConnection();
> > >>>>          PreparedStatement statement =
> > >>>>              connection.prepareStatement(loadByKeySql)) {
> > >>>>         //statement.setObject(1, k.getId());
> > >>>>         setPreparedStatement(statement, k);
> > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > >>>>             if (rs.next()) {
> > >>>>                 value = rowMapper.mapRow(rs, 0);
> > >>>>             }
> > >>>>         }
> > >>>>     } catch (SQLException e) {
> > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > >>>>     }
> > >>>>
> > >>>>     return value;
> > >>>> }
> > >>>>
> > >>>>
> > >>>> Thanks,
> > >>>>
> > >>>> Akash
> > >>>>
> > >>>>
> >

Re: Read through not working as expected in case of Replicated cache

Posted by Prasad Bhalerao <pr...@gmail.com>.
Can someone please comment on this?

On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dm...@apache.org> wrote:

> Ignite Dev team,
>
> > This sounds like an issue in our replicated cache implementation
> > rather than expected behavior, especially since partitioned caches
> > don't have this quirk.
> >
> > Who can explain why write-through needs to be enabled for replicated
> > caches to reload an entry from an underlying database properly and
> > consistently?
>
> -
> Denis
>
>
> On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <ilya.kasnacheev@gmail.com
> >
> wrote:
>
> > Hello!
> >
> > I think this is by design. You may suggest edits on readme.io.
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> > prasadbhalerao1983@gmail.com>:
> >
> >> Hi,
> >>
> >> Is this a bug, or is the cache designed to work this way?
> >>
> >> If it is as designed, can this behavior be documented in the Ignite
> >> documentation?
> >>
> >> Thanks,
> >> Prasad
> >>
> >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> >> ilya.kasnacheev@gmail.com> wrote:
> >>
> >>> Hello!
> >>>
> >>> I have discussed this with fellow Ignite developers, and they say read
> >>> through for replicated cache would work where there is either:
> >>>
> >>> - writeThrough enabled and all changes go through it.
> >>> - database contents do not change for already read keys.
> >>>
> >>> I can see that neither is met in your case, so you can expect the
> >>> behavior that you are seeing.
> >>>
> >>> Regards,
> >>> --
> >>> Ilya Kasnacheev
> >>>
> >>>
> >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com>:
> >>>
> >>>> I am using Ignite version 2.6.
> >>>>
> >>>> I am starting 3 server nodes with a replicated cache and 1 client node.
> >>>> The cache configuration is as follows: read-through is true but
> >>>> write-through is false. Loading data by key is implemented in the
> >>>> cache-loader as given below.
> >>>>
> >>>> Steps to reproduce the issue:
> >>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
> >>>> (The entry is only removed from the cache; it is still present in the DB
> >>>> because write-through is false.)
> >>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
> >>>> 3) Now query the cache from the client node. Every invocation returns
> >>>> different results: sometimes it returns the reloaded entry, sometimes it
> >>>> returns results without the reloaded entry.
> >>>>
> >>>> It looks like read-through is not replicating the reloaded entry to all
> >>>> nodes in the case of a REPLICATED cache.
> >>>>
> >>>> To investigate further, I changed the cache mode to PARTITIONED and set
> >>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
> >>>> mimic REPLICATED behavior). This time it worked as expected: every
> >>>> invocation returned the same result, with the reloaded entry.
> >>>>
> >>>> private CacheConfiguration networkCacheCfg() {
> >>>>     CacheConfiguration networkCacheCfg =
> >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> >>>>     networkCacheCfg.setWriteThrough(false);
> >>>>     networkCacheCfg.setReadThrough(true);
> >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> >>>>     networkCacheCfg.setWriteSynchronizationMode(
> >>>>         CacheWriteSynchronizationMode.FULL_SYNC);
> >>>>     //networkCacheCfg.setBackups(3);
> >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> >>>>         NetworkData.class);
> >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> >>>>     RendezvousAffinityFunction affinityFunction =
> >>>>         new RendezvousAffinityFunction();
> >>>>     affinityFunction.setExcludeNeighbors(false);
> >>>>     networkCacheCfg.setAffinity(affinityFunction);
> >>>>     networkCacheCfg.setStatisticsEnabled(true);
> >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> >>>>     return networkCacheCfg;
> >>>> }
> >>>>
> >>>> @Override
> >>>> public V load(K k) throws CacheLoaderException {
> >>>>     V value = null;
> >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> >>>>     try (Connection connection = dataSource.getConnection();
> >>>>          PreparedStatement statement =
> >>>>              connection.prepareStatement(loadByKeySql)) {
> >>>>         //statement.setObject(1, k.getId());
> >>>>         setPreparedStatement(statement, k);
> >>>>         try (ResultSet rs = statement.executeQuery()) {
> >>>>             if (rs.next()) {
> >>>>                 value = rowMapper.mapRow(rs, 0);
> >>>>             }
> >>>>         }
> >>>>     } catch (SQLException e) {
> >>>>         throw new CacheLoaderException(e.getMessage(), e);
> >>>>     }
> >>>>
> >>>>     return value;
> >>>> }
> >>>>
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Akash
> >>>>
> >>>>
>

Re: Read through not working as expected in case of Replicated cache

Posted by Denis Magda <dm...@apache.org>.
Ignite Dev team,

This sounds like an issue in our replicated cache implementation rather
than expected behavior, especially since partitioned caches don't have
this quirk.

Who can explain why write-through needs to be enabled for replicated
caches to reload an entry from an underlying database properly and
consistently?

-
Denis


On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <il...@gmail.com>
wrote:

> Hello!
>
> I think this is by design. You may suggest edits on readme.io.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> prasadbhalerao1983@gmail.com>:
>
>> Hi,
>>
>> Is this a bug, or is the cache designed to work this way?
>>
>> If it is as designed, can this behavior be documented in the Ignite
>> documentation?
>>
>> Thanks,
>> Prasad
>>
>> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
>> ilya.kasnacheev@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I have discussed this with fellow Ignite developers, and they say read
>>> through for replicated cache would work where there is either:
>>>
>>> - writeThrough enabled and all changes go through it.
>>> - database contents do not change for already read keys.
>>>
>>> I can see that neither is met in your case, so you can expect the
>>> behavior that you are seeing.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com>:
>>>
>>>> I am using Ignite version 2.6.
>>>>
>>>> I am starting 3 server nodes with a replicated cache and 1 client node.
>>>> The cache configuration is as follows: read-through is true but
>>>> write-through is false. Loading data by key is implemented in the
>>>> cache-loader as given below.
>>>>
>>>> Steps to reproduce the issue:
>>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
>>>> (The entry is only removed from the cache; it is still present in the DB
>>>> because write-through is false.)
>>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
>>>> 3) Now query the cache from the client node. Every invocation returns
>>>> different results: sometimes it returns the reloaded entry, sometimes it
>>>> returns results without the reloaded entry.
>>>>
>>>> It looks like read-through is not replicating the reloaded entry to all
>>>> nodes in the case of a REPLICATED cache.
>>>>
>>>> To investigate further, I changed the cache mode to PARTITIONED and set
>>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>>>> mimic REPLICATED behavior). This time it worked as expected: every
>>>> invocation returned the same result, with the reloaded entry.
>>>>
>>>> private CacheConfiguration networkCacheCfg() {
>>>>     CacheConfiguration networkCacheCfg =
>>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>>     networkCacheCfg.setWriteThrough(false);
>>>>     networkCacheCfg.setReadThrough(true);
>>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>>     networkCacheCfg.setWriteSynchronizationMode(
>>>>         CacheWriteSynchronizationMode.FULL_SYNC);
>>>>     //networkCacheCfg.setBackups(3);
>>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>>>     Factory<NetworkDataCacheLoader> storeFactory =
>>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
>>>>         NetworkData.class);
>>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>>>     RendezvousAffinityFunction affinityFunction =
>>>>         new RendezvousAffinityFunction();
>>>>     affinityFunction.setExcludeNeighbors(false);
>>>>     networkCacheCfg.setAffinity(affinityFunction);
>>>>     networkCacheCfg.setStatisticsEnabled(true);
>>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>>>     return networkCacheCfg;
>>>> }
>>>>
>>>> @Override
>>>> public V load(K k) throws CacheLoaderException {
>>>>     V value = null;
>>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>>>     try (Connection connection = dataSource.getConnection();
>>>>          PreparedStatement statement =
>>>>              connection.prepareStatement(loadByKeySql)) {
>>>>         //statement.setObject(1, k.getId());
>>>>         setPreparedStatement(statement, k);
>>>>         try (ResultSet rs = statement.executeQuery()) {
>>>>             if (rs.next()) {
>>>>                 value = rowMapper.mapRow(rs, 0);
>>>>             }
>>>>         }
>>>>     } catch (SQLException e) {
>>>>         throw new CacheLoaderException(e.getMessage(), e);
>>>>     }
>>>>
>>>>     return value;
>>>> }
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Akash
>>>>
>>>>

Re: Read through not working as expected in case of Replicated cache

Posted by Ilya Kasnacheev <il...@gmail.com>.
Hello!

I think this is by design. You may suggest edits on readme.io.

Regards,
-- 
Ilya Kasnacheev


Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <prasadbhalerao1983@gmail.com
>:

> Hi,
>
> Is this a bug, or is the cache designed to work this way?
>
> If it is as designed, can this behavior be documented in the Ignite
> documentation?
>
> Thanks,
> Prasad
>
> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <il...@gmail.com>
> wrote:
>
>> Hello!
>>
>> I have discussed this with fellow Ignite developers, and they say read
>> through for replicated cache would work where there is either:
>>
>> - writeThrough enabled and all changes go through it.
>> - database contents do not change for already read keys.
>>
>> I can see that neither is met in your case, so you can expect the
>> behavior that you are seeing.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <ak...@gmail.com>:
>>
>>> I am using Ignite version 2.6.
>>>
>>> I am starting 3 server nodes with a replicated cache and 1 client node.
>>> The cache configuration is as follows: read-through is true but
>>> write-through is false. Loading data by key is implemented in the
>>> cache-loader as given below.
>>>
>>> Steps to reproduce the issue:
>>> 1) Delete an entry from the cache using the IgniteCache.remove() method.
>>> (The entry is only removed from the cache; it is still present in the DB
>>> because write-through is false.)
>>> 2) Invoke the IgniteCache.get() method for the same key as in step 1.
>>> 3) Now query the cache from the client node. Every invocation returns
>>> different results: sometimes it returns the reloaded entry, sometimes it
>>> returns results without the reloaded entry.
>>>
>>> It looks like read-through is not replicating the reloaded entry to all
>>> nodes in the case of a REPLICATED cache.
>>>
>>> To investigate further, I changed the cache mode to PARTITIONED and set
>>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>>> mimic REPLICATED behavior). This time it worked as expected: every
>>> invocation returned the same result, with the reloaded entry.
>>>
>>> private CacheConfiguration networkCacheCfg() {
>>>     CacheConfiguration networkCacheCfg =
>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>     networkCacheCfg.setWriteThrough(false);
>>>     networkCacheCfg.setReadThrough(true);
>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>     networkCacheCfg.setWriteSynchronizationMode(
>>>         CacheWriteSynchronizationMode.FULL_SYNC);
>>>     //networkCacheCfg.setBackups(3);
>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>>     Factory<NetworkDataCacheLoader> storeFactory =
>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
>>>         NetworkData.class);
>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>>     RendezvousAffinityFunction affinityFunction =
>>>         new RendezvousAffinityFunction();
>>>     affinityFunction.setExcludeNeighbors(false);
>>>     networkCacheCfg.setAffinity(affinityFunction);
>>>     networkCacheCfg.setStatisticsEnabled(true);
>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>>     return networkCacheCfg;
>>> }
>>>
>>> @Override
>>> public V load(K k) throws CacheLoaderException {
>>>     V value = null;
>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>>     try (Connection connection = dataSource.getConnection();
>>>          PreparedStatement statement =
>>>              connection.prepareStatement(loadByKeySql)) {
>>>         //statement.setObject(1, k.getId());
>>>         setPreparedStatement(statement, k);
>>>         try (ResultSet rs = statement.executeQuery()) {
>>>             if (rs.next()) {
>>>                 value = rowMapper.mapRow(rs, 0);
>>>             }
>>>         }
>>>     } catch (SQLException e) {
>>>         throw new CacheLoaderException(e.getMessage(), e);
>>>     }
>>>
>>>     return value;
>>> }
>>>
>>>
>>> Thanks,
>>>
>>> Akash
>>>
>>>