Posted to issues@ignite.apache.org by "Maxim Muzafarov (Jira)" <ji...@apache.org> on 2021/10/15 11:00:00 UTC

[jira] [Commented] (IGNITE-1605) Provide stronger data loss check

    [ https://issues.apache.org/jira/browse/IGNITE-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17429234#comment-17429234 ] 

Maxim Muzafarov commented on IGNITE-1605:
-----------------------------------------

Please refer to the discussion of partition loss policy handling:
https://issues.apache.org/jira/browse/IGNITE-13003

> Provide stronger data loss check
> --------------------------------
>
>                 Key: IGNITE-1605
>                 URL: https://issues.apache.org/jira/browse/IGNITE-1605
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Yakov Zhdanov
>            Priority: Major
>              Labels: important
>
> We need to provide a stronger data loss check.
> Currently a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.
> However, this is not enough: if there is a strong requirement on application behavior on data loss (e.g., further cache updates should throw an exception), that requirement currently cannot be met even with a cache interceptor.
> Suggestions (see the sketch after this description):
> * Introduce a CacheDataLossPolicy enum (FAIL_OPS, NOOP) and put it into the configuration.
> * If a node fires PART_LOST_EVT, then any update to a lost partition will throw (or will not throw) an exception according to the DataLossPolicy.
> * ForceKeysRequest should be completed with an exception (if plc == FAIL) when all nodes to request from are gone, so all gets/puts/txs should fail.
> * Add a public API method to allow recovery from the failed state.
> Another solution is to detect partition loss at the time the partition exchange completes. Since we hold the topology lock during the exchange, we can easily check that a partition has no owners and act as a topology validator when the FAIL policy is configured. One thing needs to be carefully analyzed: the demand worker should not mark a partition as owning if the last owner leaves the grid before the corresponding exchange completes.
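For illustration, here is a minimal sketch of the guard the suggestions above describe, assuming the names proposed in the issue (CacheDataLossPolicy, FAIL_OPS, NOOP and the DataLossGuard wiring are hypothetical, not a shipped Ignite API):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.cache.CacheException;

    /** Hypothetical guard on the cache update path; names follow the proposal above. */
    final class DataLossGuard {
        /** Proposed policy enum from the first suggestion; not a shipped API. */
        enum CacheDataLossPolicy { FAIL_OPS, NOOP }

        private final CacheDataLossPolicy plc;
        private final Set<Integer> lostParts = ConcurrentHashMap.newKeySet();

        DataLossGuard(CacheDataLossPolicy plc) { this.plc = plc; }

        /** Invoked when EVT_CACHE_REBALANCE_PART_DATA_LOST fires for a partition. */
        void onPartitionLost(int part) { lostParts.add(part); }

        /** Invoked from gets/puts/txs before touching a partition. */
        void checkPartition(int part) {
            if (plc == CacheDataLossPolicy.FAIL_OPS && lostParts.contains(part))
                throw new CacheException("Partition " + part + " was lost; operation rejected until recovery.");
        }

        /** The proposed public recovery hook: clears the lost state after an operator decision. */
        void resetLostPartitions() { lostParts.clear(); }
    }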
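For reference, the mechanism that eventually shipped in Ignite 2.x (see the IGNITE-13003 discussion linked in the comment above) has the same shape: a PartitionLossPolicy on CacheConfiguration plus Ignite#resetLostPartitions for recovery. A minimal usage example, assuming a locally started node:

    import java.util.Collections;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.PartitionLossPolicy;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class LostPartitionExample {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

                // Reads and writes touching lost partitions throw CacheException until reset.
                ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

                IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

                // After a loss event: inspect the lost set, then recover explicitly.
                System.out.println("Lost partitions: " + cache.lostPartitions());
                ignite.resetLostPartitions(Collections.singleton("myCache"));
            }
        }
    }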


