Posted to issues@ozone.apache.org by "sumitagrawl (via GitHub)" <gi...@apache.org> on 2023/08/25 05:27:45 UTC

[GitHub] [ozone] sumitagrawl commented on a diff in pull request #5207: HDDS-8977. Ratis crash if a lot of directories deleted at once

sumitagrawl commented on code in PR #5207:
URL: https://github.com/apache/ozone/pull/5207#discussion_r1305155976


##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/SnapshotDeletingService.java:
##########
@@ -89,6 +91,7 @@ public class SnapshotDeletingService extends AbstractKeyDeletingService {
   // from the same table and can send deletion requests for same snapshot
   // multiple times.
   private static final int SNAPSHOT_DELETING_CORE_POOL_SIZE = 1;
+  private static final int MIN_ERR_LIMIT_PER_TASK = 1000;

Review Comment:
   The default Ratis buffer limit is 32 MB, but even 6000 entries * ~4 KB (approximate size per entry) => 24 MB, which already approaches that limit. So we are going with 1000 entries per task, i.e. 32 MB / 1000 => ~32 KB of headroom per entry.
   We cannot calculate the request size until it is serialized, and doing so would require something like a binary-search style calculation until the limit criterion is met. That kind of logic is not required here; if the limit is still exceeded even with 1000 entries, the user needs to increase the Ratis buffer size.
   
   So this size-based logic is not implemented, as it is not simple to determine the remaining buffer directly without serializing.
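   
   For illustration only, here is a minimal, self-contained sketch (not the actual SnapshotDeletingService code) contrasting the two approaches discussed above: capping each purge task by a fixed entry count (the approach kept in this PR) versus capping by serialized size (the approach this comment argues against). The class and method names (PurgeBatcher, batchByCount, batchBySerializedSize) are hypothetical; the 32 MB buffer limit and the 1000-entry cap are the values from the discussion.
   
   ```java
   // Hedged illustration only -- not the actual SnapshotDeletingService code.
   // Names (PurgeBatcher, batchByCount, batchBySerializedSize) are hypothetical;
   // the 32 MB buffer limit and 1000-entry cap are the values from the discussion.
   import java.nio.charset.StandardCharsets;
   import java.util.ArrayList;
   import java.util.List;

   public class PurgeBatcher {
     private static final long RATIS_BUFFER_LIMIT = 32L * 1024 * 1024; // assumed 32 MB default
     private static final int MAX_ENTRIES_PER_TASK = 1000;             // mirrors MIN_ERR_LIMIT_PER_TASK

     // Count-based batching: cheap, needs no serialization before the batch boundary is known.
     static List<List<String>> batchByCount(List<String> dirs) {
       List<List<String>> batches = new ArrayList<>();
       for (int i = 0; i < dirs.size(); i += MAX_ENTRIES_PER_TASK) {
         int end = Math.min(i + MAX_ENTRIES_PER_TASK, dirs.size());
         batches.add(new ArrayList<>(dirs.subList(i, end)));
       }
       return batches;
     }

     // Size-based batching: the rejected alternative. Every entry has to be measured
     // (effectively serialized) before we know where a batch ends.
     static List<List<String>> batchBySerializedSize(List<String> dirs) {
       List<List<String>> batches = new ArrayList<>();
       List<String> current = new ArrayList<>();
       long currentBytes = 0;
       for (String dir : dirs) {
         long entryBytes = dir.getBytes(StandardCharsets.UTF_8).length;
         if (!current.isEmpty() && currentBytes + entryBytes > RATIS_BUFFER_LIMIT) {
           batches.add(current);
           current = new ArrayList<>();
           currentBytes = 0;
         }
         current.add(dir);
         currentBytes += entryBytes;
       }
       if (!current.isEmpty()) {
         batches.add(current);
       }
       return batches;
     }

     public static void main(String[] args) {
       List<String> dirs = new ArrayList<>();
       for (int i = 0; i < 6000; i++) {
         dirs.add("/vol/bucket/deleted-dir-" + i);
       }
       System.out.println("count-based batches: " + batchByCount(dirs).size());
       System.out.println("size-based batches : " + batchBySerializedSize(dirs).size());
     }
   }
   ```
   
   The count-based variant needs no serialization up front, while the size-based variant has to measure (effectively serialize) every entry before it knows where a batch ends, which is the extra complexity this comment rejects.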



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

