Posted to issues@ozone.apache.org by "mingchao zhao (Jira)" <ji...@apache.org> on 2020/10/12 06:55:00 UTC

[jira] [Issue Comment Deleted] (HDDS-4308) Fix issue with quota update

     [ https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mingchao zhao updated HDDS-4308:
--------------------------------
    Comment: was deleted

(was: Hi [~bharat] Can we keep updating the usedBytes of volumeArgs in memory and persist it in takeSnapshot (currently, this action is executed periodically)?
Just want to make sure this is feasible. I am not familiar with snapshot and Ratis. If it is feasible, I would be glad to discuss it here.)

> Fix issue with quota update
> ---------------------------
>
>                 Key: HDDS-4308
>                 URL: https://issues.apache.org/jira/browse/HDDS-4308
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Priority: Blocker
>
> Currently volumeArgs is obtained via getCacheValue and the same object is put into the doubleBuffer, which can cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 10000
> 1. T1 -> updates VolumeArgs, subtracting 1000, and puts the updated volumeArgs into the DoubleBuffer.
> 2. T2 -> updates VolumeArgs, subtracting 2000, but has not yet been put into the double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as bytes used.*
> Now T1 is picked up by the double buffer, and when it commits, because the same cached object was put into the doubleBuffer, it flushes the value already updated by T2 (as it is the shared cache object) and writes bytesUsed as 7000 to the DB.
> Now suppose OM restarts, and the DB only has transactions up to T1. (We get this info from the TransactionInfo Table, https://issues.apache.org/jira/browse/HDDS-3685.)
> T2 is then replayed, as it was not committed to the DB, so 2000 is subtracted again and the DB ends up with 5000.
> But after T2 the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. Because we use a cached object and put that same cached object into the double buffer, this kind of inconsistency can occur.
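
The scenario above can be sketched in a few lines of Java. This is a minimal illustration, not the actual Ozone OM code: the `VolumeArgs` class, its `usedBytes` field, and the `copy()` helper here are simplified stand-ins for `OmVolumeArgs` and the OM double buffer, chosen only to show why enqueueing the shared cached object loses T1's intermediate value, and how a copy-on-put would avoid it.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QuotaCacheSketch {
    // Simplified stand-in for OmVolumeArgs: mutable and shared via the cache.
    static class VolumeArgs {
        long usedBytes;
        VolumeArgs(long usedBytes) { this.usedBytes = usedBytes; }
        VolumeArgs copy() { return new VolumeArgs(usedBytes); }
    }

    public static void main(String[] args) {
        VolumeArgs cached = new VolumeArgs(10000);    // initial usage
        Queue<VolumeArgs> doubleBuffer = new ArrayDeque<>();

        // T1: subtract 1000 and enqueue the SAME cached object.
        cached.usedBytes -= 1000;                     // now 9000
        doubleBuffer.add(cached);                     // bug: shared reference

        // T2: subtract 2000 before T1 has been flushed to the DB.
        cached.usedBytes -= 2000;                     // now 7000

        // Flushing T1 now observes T2's mutation: 7000 instead of 9000.
        long flushedForT1 = doubleBuffer.poll().usedBytes;
        System.out.println("T1 flushed value: " + flushedForT1);

        // If OM restarts here, the DB holds 7000 at transaction T1, and
        // replaying T2 subtracts 2000 again: 5000 instead of the correct 7000.
        long afterReplay = flushedForT1 - 2000;
        System.out.println("After T2 replay: " + afterReplay);

        // Copying before enqueueing snapshots T1's state and avoids the bug.
        VolumeArgs cached2 = new VolumeArgs(10000);
        Queue<VolumeArgs> buffer2 = new ArrayDeque<>();
        cached2.usedBytes -= 1000;
        buffer2.add(cached2.copy());                  // fix: snapshot copy
        cached2.usedBytes -= 2000;
        System.out.println("T1 flushed (copy): " + buffer2.poll().usedBytes);
    }
}
```

The copy-on-put at the end is one possible remedy: each transaction enqueues an immutable snapshot of its own state, so a later in-memory update cannot retroactively change what an earlier transaction flushes.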



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org