Posted to user@ignite.apache.org by Sumit Deshinge <su...@gmail.com> on 2022/01/28 10:18:43 UTC

Server cache reads fewer entries than the number of entries put by the client

Hi,

We are running Apache Ignite 2.11 with the cache configured in FULL_SYNC and
REPLICATED mode.

Our use case is:

1. *Multiple thin clients are adding data* into a cache using the putAll
operation.
2. *Simultaneously, the server is reading the data* using a server-side
cache iterator.
3. *While iterating over the cache, entries are removed from the cache and
added into a new cache using a transaction*, i.e. a transaction with remove
and put operations. The transaction uses *pessimistic concurrency and the
repeatable_read isolation level*.

But we are seeing a few missing entries on the server side, i.e. the server
is not able to read all the data put by the clients. E.g. in one run, the
thin clients together put 5000 entries, but the server was able to read only
4999 entries; 1 entry was never read by the server.

*Another observation is that if we remove the transaction in step 3 above,
or use an optimistic transaction with the serializable isolation level, then
this issue is not observed*.

What could be the possible problem in this use case with pessimistic
concurrency and the repeatable_read isolation level? This is particularly
important, as this configuration is resulting in data loss.
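
In outline, the flow looks roughly like this (a simplified sketch, not our
actual code; cache names, value types, and the connection address are
placeholders, and both caches are assumed to use TRANSACTIONAL atomicity
mode, which Ignite requires for transactional operations):

    import java.util.HashMap;
    import java.util.Map;

    import javax.cache.Cache;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;
    import org.apache.ignite.lang.IgniteUuid;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;

    public class MoveEntriesSketch {
        /** Step 1: a thin client adds a batch of entries with putAll. */
        static void clientPut() throws Exception {
            try (IgniteClient client = Ignition.startClient(
                    new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
                Map<IgniteUuid, String> batch = new HashMap<>();
                for (int i = 0; i < 100; i++)
                    batch.put(IgniteUuid.randomUuid(), "value-" + i);

                client.getOrCreateCache("source").putAll(batch);
            }
        }

        /**
         * Steps 2 and 3: the server iterates over the source cache and moves
         * each entry into the target cache inside a PESSIMISTIC /
         * REPEATABLE_READ transaction.
         */
        static void serverMove(Ignite ignite) {
            IgniteCache<IgniteUuid, String> src = ignite.cache("source");
            IgniteCache<IgniteUuid, String> dst = ignite.cache("target");

            for (Cache.Entry<IgniteUuid, String> e : src) {
                try (Transaction tx = ignite.transactions().txStart(
                        TransactionConcurrency.PESSIMISTIC,
                        TransactionIsolation.REPEATABLE_READ)) {
                    dst.put(e.getKey(), e.getValue());
                    src.remove(e.getKey());
                    tx.commit();
                }
            }
        }
    }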

-- 
Regards,
Sumit Deshinge

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
Can you please share the code?

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Sumit Deshinge <su...@gmail.com>.
That's correct, but this happens when we use a Pessimistic + Repeatable_Read
transaction on the server side, where we are using an iterator to read, then
remove and insert into the new cache within the transaction. And this issue
is not observed every time, but intermittently.

And note that if I don't use transactions, or use Optimistic + Serializable
transactions, then everything works fine.

-- 
Regards,
Sumit Deshinge

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
So we have the following situation:
* Put 5000 unique keys with putAll
* Use a cache iterator, observe fewer than 5000 keys

Is that correct?

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Sumit Deshinge <su...@gmail.com>.
Yes, of that I am sure, because the keys are generated using Ignite UUID,
which internally is based on the hostname, and all the clients are hosted on
machines with unique hostnames.
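
For illustration, the key generation is essentially (simplified):

    import org.apache.ignite.lang.IgniteUuid;

    // IgniteUuid.randomUuid() yields identifiers that are unique across
    // JVMs and hosts, so concurrent clients cannot generate colliding keys.
    IgniteUuid key = IgniteUuid.randomUuid();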

-- 
Regards,
Sumit Deshinge

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
Are you sure that all entry keys are unique?
E.g. if you do 5000 puts but some keys are the same, the result will be
fewer than 5000 entries.
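
For example (a fragment, assuming an existing IgniteCache<String, String>
named cache):

    // Two puts with the same key leave a single entry behind:
    cache.put("same-key", "from-client-1");
    cache.put("same-key", "from-client-2"); // overwrites the first value

    assert cache.size() == 1;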

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Sumit Deshinge <su...@gmail.com>.
No, the cache does not have the entries. Somehow the number of entries
returned is less than the number of entries put by all the thin clients.

-- 
Regards,
Sumit Deshinge

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
Do you mean that the cache has some entries, but the iterator does not
return them?

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Sumit Deshinge <su...@broadcom.com>.
Hi Pavel,

I am trying to move the data from one cache (over which I am iterating) to
another cache in a transaction.
When the iterator reports no further elements, I try getting a new iterator
after a few seconds, to check whether any new data is available.

In this process, I am missing one or two entries. But if I remove the
transaction, or use the optimistic+serializable instead of the
pessimistic+repeatable_read transaction type, then this loss of data is not
observed with the same steps.
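
Roughly, the polling looks like this (a fragment; moveAllInTx is a made-up
stand-in for the transactional iterate/remove/put loop, and src and dst are
the source and target caches):

    while (!Thread.currentThread().isInterrupted()) {
        // Drain everything the current iterator can see, moving each
        // entry from src to dst inside a transaction.
        moveAllInTx(src, dst);

        try {
            Thread.sleep(5_000); // wait a few seconds...
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            break;
        }
        // ...then the next pass obtains a fresh iterator over src to
        // pick up any newly arrived entries.
    }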


-- 

Sumit Deshinge

R&D Engineer | Symantec Enterprise Division

Broadcom Software

Email: Sumit Deshinge <su...@broadcom.com>


Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
> While iterating over the cache, data is removed from the cache

Sumit, as I understand, you read data while you also remove it, so it is
not clear what the expectation is.

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Sumit Deshinge <su...@gmail.com>.
Thank you, Surinder and Pavel. I will give this approach a try.
But even in the iterator case, when I refresh the iterator once it has
reached the last record, i.e. obtain a new iterator, it does not return all
the entries, as described in the steps in the first email.

-- 
Regards,
Sumit Deshinge

Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Pavel Tupitsyn <pt...@apache.org>.
The cache iterator does not guarantee that you'll see all entries if there
are concurrent updates; I think you are facing a race condition.
Please try ContinuousQuery as Surinder suggests; it will catch all data
changes.


Re: Server cache reads fewer entries than the number of entries put by the client

Posted by Surinder Mehra <re...@gmail.com>.
Just curious: why can't we use a continuous query here, with the
"appropriate" event type, to write to the other cache? Your listener would
then do two things (see the sketch below):
1. Write the entry to another cache
2. Remove the entry from the source cache

Just an idea, please correct me if I am wrong.
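
Something along these lines, perhaps (a rough, untested sketch; src and dst
stand for the source and target IgniteCache<IgniteUuid, String> handles on
the server):

    import javax.cache.Cache;
    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.EventType;

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.lang.IgniteUuid;

    class MoveViaContinuousQuery {
        static QueryCursor<Cache.Entry<IgniteUuid, String>> start(
                IgniteCache<IgniteUuid, String> src,
                IgniteCache<IgniteUuid, String> dst) {
            ContinuousQuery<IgniteUuid, String> qry = new ContinuousQuery<>();

            // Entries that already exist are delivered via the initial query.
            qry.setInitialQuery(new ScanQuery<>());

            // Every subsequent change arrives at the local listener.
            // NB: doing cache operations directly in the listener is a
            // simplification; in production you would likely hand the work
            // off to a separate thread.
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends IgniteUuid, ? extends String> e : events) {
                    if (e.getEventType() == EventType.CREATED) {
                        dst.put(e.getKey(), e.getValue()); // 1. write to the other cache
                        src.remove(e.getKey());            // 2. remove from the source
                    }
                }
            });

            QueryCursor<Cache.Entry<IgniteUuid, String>> cur = src.query(qry);

            // Move the pre-existing entries returned by the initial query.
            for (Cache.Entry<IgniteUuid, String> e : cur) {
                dst.put(e.getKey(), e.getValue());
                src.remove(e.getKey());
            }

            return cur; // keep it open; closing the cursor stops the query
        }
    }

The initial ScanQuery covers entries that were already in the cache before
the listener was registered, which should avoid the race that a plain
iterator has with concurrent puts.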
