Posted to issues@ignite.apache.org by "Tanmay Ambre (Jira)" <ji...@apache.org> on 2021/07/06 16:58:00 UTC

[jira] [Updated] (IGNITE-15068) Ignite giving stale data - when data is inserted/updated and immediately read (from the same client node)

     [ https://issues.apache.org/jira/browse/IGNITE-15068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tanmay Ambre updated IGNITE-15068:
----------------------------------
    Description: 
Hi, 

We have an 18-node (server) cluster for Ignite. We have multiple client nodes that connect to Ignite and use the cache API to get and put data. 

One of the client nodes is a Kafka Streams application. This application receives events, and for each event (see the sketch after this list):
 # we look up the data in Ignite using the key, i.e. cache.get(key)
 # we update the data if we find an entry in Ignite, i.e. cache.put(key, value)
 # we insert the data if we don't find an entry in Ignite, i.e. cache.put(key, value)
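
To make the flow concrete, here is a minimal sketch of that lookup-then-put logic against the Ignite cache API. The cache name, key/value types, and client configuration file are illustrative only (our real application uses its own types and configuration), so please read it as a sketch of the pattern rather than our actual code:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class UpsertFlow {
    public static void main(String[] args) {
        // Start the node (client mode is set in the Spring config; the path is illustrative).
        Ignite ignite = Ignition.start("client-config.xml");
        IgniteCache<Long, Integer> cache = ignite.getOrCreateCache("eventCache");

        long key = 1L;

        // 1. Look up the data for the key.
        Integer version = cache.get(key);

        if (version != null) {
            // 2. Entry found: update it (increment the version).
            cache.put(key, version + 1);
        } else {
            // 3. Entry not found: insert it with the initial version.
            cache.put(key, 1);
        }
    }
}
{code}

The stale reads described below are observed in step 1 of this flow.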

This application processes more than 30 million events per day. In some scenarios we get multiple events for the same key "consecutively", i.e. the time difference between consecutive events is no more than 10 to 20 milliseconds. We are sure they are processed sequentially, as they go to the same Kafka partition. 

What we have observed is that sometimes step #1 gives us an old copy of the data (not the one that was updated in step #2). 

 

For example, when we get the first event for a given key, we have the following value:

Key = 1, value = \{version: 1}

when we get the second event, we update Ignite as:

Key = 1, value = \{version: 2} //increment version by 1

when we get the third event and look up the data in Ignite, instead of getting \{version: 2} we get {color:#ff0000}\{version: 1}{color}

Also, sometimes when we get the second event we {color:#ff0000}don't even find the entry in Ignite{color} (i.e. \{version: 1} is missing)

Our caches have the following configuration:

backups = 1

atomicityMode = ATOMIC

writeSynchronizationMode = PRIMARY_SYNC

readFromBackup = true (default - we are not setting this)
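
For clarity, the equivalent cache configuration in code would look roughly as follows. The cache name and key/value types are illustrative; the values mirror the settings listed above:

{code:java}
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class EventCacheConfig {
    static CacheConfiguration<Long, Integer> eventCacheConfiguration() {
        // Cache name and key/value types are illustrative.
        CacheConfiguration<Long, Integer> cacheCfg = new CacheConfiguration<>("eventCache");

        cacheCfg.setBackups(1);
        cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
        cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);

        // readFromBackup is true by default; we do not set it explicitly in our setup.
        // It is spelled out here only to make the effective value visible.
        cacheCfg.setReadFromBackup(true);

        return cacheCfg;
    }
}
{code}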

 

Is there something wrong? Should we set "readFromBackup = false"?
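
If reading from backups is indeed the cause, we assume the change would simply be the following (continuing the configuration sketch above):

{code:java}
// Assumed change, if backup reads turn out to be the culprit:
// force all reads to go to the primary copy of a partition.
cacheCfg.setReadFromBackup(false);
{code}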

Unfortunately it is very difficult to reproduce, since it happens only between 50 and 100 times a day (i.e. out of the 30 million events that we are getting). 

 

 

> Ignite giving stale data - when data is inserted/updated and immediately read (from the same client node)
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-15068
>                 URL: https://issues.apache.org/jira/browse/IGNITE-15068
>             Project: Ignite
>          Issue Type: Task
>          Components: cache
>    Affects Versions: 2.9.1
>            Reporter: Tanmay Ambre
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)