Posted to issues@ozone.apache.org by "Bharat Viswanadham (Jira)" <ji...@apache.org> on 2020/10/05 17:28:00 UTC

[jira] [Created] (HDDS-4308) Fix issue with quota update

Bharat Viswanadham created HDDS-4308:
----------------------------------------

             Summary: Fix issue with quota update
                 Key: HDDS-4308
                 URL: https://issues.apache.org/jira/browse/HDDS-4308
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Bharat Viswanadham


Currently volumeArgs is fetched with getCacheValue() and the same object is put into the doubleBuffer; this can cause an issue.

Let's take the below scenario:

Initial VolumeArgs quotaBytes -> 10000
1. T1 -> updates VolumeArgs, subtracting 1000, and puts the updated volumeArgs into the DoubleBuffer.
2. T2 -> updates VolumeArgs, subtracting 2000, and has not yet been flushed by the double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as bytes used.*

Now T1 is picked up by the double buffer, and when it commits, because the cached object itself was put into the doubleBuffer, it flushes the value already processed by T2 (as it is the cached object) and updates the DB with bytesUsed as 7000.

Now suppose OM restarts, and the DB has transactions only up to T1. (We get this info from the TransactionInfo table, https://issues.apache.org/jira/browse/HDDS-3685.)

Now T2 is replayed, as it was not committed to the DB, so 2000 is subtracted again and the DB ends up with 5000.
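Here is a minimal sketch of the whole sequence in plain Java (class and field names are simplified illustrations, not the actual OM code):

{code:java}
// Minimal sketch of the aliasing problem: both the table cache and the
// double buffer hold a reference to the SAME mutable object.
class VolumeArgs {
  long quotaBytes;
  VolumeArgs(long quotaBytes) { this.quotaBytes = quotaBytes; }
}

public class QuotaAliasingSketch {
  public static void main(String[] args) {
    VolumeArgs cached = new VolumeArgs(10000); // value in the table cache

    // T1: subtract 1000 and hand the object to the double buffer.
    cached.quotaBytes -= 1000;
    VolumeArgs inDoubleBuffer = cached;        // same reference, no copy!

    // T2: subtract 2000 before T1 has been flushed.
    cached.quotaBytes -= 2000;

    // The double buffer now flushes T1, but T2's mutation is visible
    // through the shared reference, so the DB gets 7000 instead of 9000.
    long db = inDoubleBuffer.quotaBytes;
    System.out.println("DB after flushing T1: " + db); // 7000

    // OM restarts; the DB only has T1, so T2 is replayed against 7000.
    db -= 2000;
    System.out.println("DB after replaying T2: " + db); // 5000, not 7000
  }
}
{code}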

Issue here:
1. Because we use a cached object and put that same cached object into the double buffer, a later transaction's in-place update can leak into an earlier transaction's flush, causing this kind of issue.
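One straightforward remedy, sketched below under the assumption that snapshotting the cached value is acceptable (an illustration only, not necessarily the fix that will land), is to put a defensive copy into the double buffer instead of the shared cached object:

{code:java}
// Sketch of a defensive-copy remedy: the double buffer gets a snapshot,
// so later transactions cannot mutate what an earlier transaction flushes.
class VolumeArgs {
  long quotaBytes;
  VolumeArgs(long quotaBytes) { this.quotaBytes = quotaBytes; }
  VolumeArgs copy() { return new VolumeArgs(quotaBytes); } // snapshot
}

public class QuotaCopySketch {
  public static void main(String[] args) {
    VolumeArgs cached = new VolumeArgs(10000);

    cached.quotaBytes -= 1000;                 // T1's update
    VolumeArgs inDoubleBuffer = cached.copy(); // copy, not the shared ref

    cached.quotaBytes -= 2000;                 // T2's update

    // T1 flushes 9000 as expected; T2's later flush brings it to 7000,
    // and replaying T2 after a restart is now idempotent with the DB state.
    System.out.println("DB after flushing T1: " + inDoubleBuffer.quotaBytes);
  }
}
{code}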


