Posted to notifications@accumulo.apache.org by GitBox <gi...@apache.org> on 2021/11/18 18:53:46 UTC

[GitHub] [accumulo-testing] DomGarguilo commented on pull request #166: Add deletes to continuous ingest

DomGarguilo commented on pull request #166:
URL: https://github.com/apache/accumulo-testing/pull/166#issuecomment-973161204


   > Have you done any manual testing of this?
   
   I have done some testing. Configuring CI to write a full set of 1,000,000 nodes at depth 25 allows the deletes to occur. After a compaction (which I triggered manually), all of the written nodes appear to be deleted.
   
   Something I found while doing that testing: it is not possible for every written entry to be deleted. For example, if I set the entries property to **25,000,000**, the code exits the loop ([here](https://github.com/apache/accumulo-testing/blob/ae207b1bd7a855a8abec2fe42c2559d2bf26405b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java#L197-L198)) before reaching the portion that initiates the deletes. If I set the entries to **25,000,001**, the deletes **will** happen, but another 1M entries are written before the next check ([here](https://github.com/apache/accumulo-testing/blob/ae207b1bd7a855a8abec2fe42c2559d2bf26405b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java#L179-L180)) triggers an exit, leaving 1M undeleted entries in the table. I don't think this is a big deal; I just thought it was interesting, and I don't see a clean way to allow this to happen.
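   To illustrate the ordering issue, here is a minimal, hypothetical sketch of the loop structure. The names (`BATCH_SIZE`, `DELETE_THRESHOLD`, `run`) and numbers are illustrative only, not the actual `ContinuousIngest` identifiers; the point is just that when the exit check precedes the delete trigger, an entry count that lands exactly on the threshold exits before any deletes run:

   ```java
   // Illustrative sketch only -- not the real ContinuousIngest code.
   public class IngestLoopSketch {
     static final long BATCH_SIZE = 1_000_000L;        // entries per batch (assumed)
     static final long DELETE_THRESHOLD = 25_000_000L; // entries before deletes (assumed)

     // Returns the number of entries left undeleted in the table.
     static long run(long numEntries) {
       long totalWritten = 0; // cumulative entries written
       long liveEntries = 0;  // entries currently in the table
       while (true) {
         totalWritten += BATCH_SIZE;
         liveEntries += BATCH_SIZE;
         // Exit check comes first, so totalWritten == numEntries exits here...
         if (totalWritten >= numEntries) {
           break;
         }
         // ...and this delete phase is skipped when numEntries lands
         // exactly on a multiple of DELETE_THRESHOLD.
         if (totalWritten % DELETE_THRESHOLD == 0) {
           liveEntries = 0; // delete everything written so far
         }
       }
       return liveEntries;
     }
   }
   ```

   With `numEntries = 25,000,000` the exit fires first and nothing is ever deleted; with `numEntries = 25,000,001` the deletes run at 25M, then one more batch is written before the exit, leaving 1M entries behind.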


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@accumulo.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org