Posted to notifications@accumulo.apache.org by "Jeffrey Manno (Jira)" <ji...@apache.org> on 2021/09/13 16:18:00 UTC

[jira] [Resolved] (ACCUMULO-2205) Add compaction filter to continuous ingest

     [ https://issues.apache.org/jira/browse/ACCUMULO-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeffrey Manno resolved ACCUMULO-2205.
-------------------------------------
    Resolution: Duplicate

Duplicated by https://github.com/apache/accumulo-testing/issues/20

> Add compaction filter to continuous ingest
> ------------------------------------------
>
>                 Key: ACCUMULO-2205
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2205
>             Project: Accumulo
>          Issue Type: Sub-task
>            Reporter: Keith Turner
>            Priority: Major
>
> It would be useful to run a compaction that deletes all of the nodes written by a given ingest client (each ingest client writes a uuid that this filter could use).  This would probably be best done after verification (or on a clone, in parallel with verification).  For example, a test could do the following steps:
> # run ingest for a time period
> # stop ingest
> # verify
> # run compaction filter to delete data written by one or more ingest clients
> # verify
> It's possible that ingest clients can overwrite each other's nodes, but it seems like this would not cause a problem.  Below is one example where this does not cause a problem:
>  # ingest client A writes 2:A->3:A->5:A->6:A->7:A
>  # ingest client B writes 12:B->13:B->5:B->16:B->17:B
>  # everything written by B is deleted
> In the above case, {{2:A->3:A}} and {{6:A->7:A}} would be the only things left.  There are no pointers to undefined nodes.
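
In Accumulo, a filter like the one described above would typically extend org.apache.accumulo.core.iterators.Filter and override accept(Key, Value), returning false for entries written by the targeted ingest clients. Below is a minimal, self-contained sketch of just the accept predicate; to keep it dependency-free it uses raw byte[] values rather than Accumulo's Key/Value types, and it assumes (hypothetically) that the ingest client uuid is the first colon-delimited field of the value — the actual continuous ingest value layout may differ:

```java
import java.nio.charset.StandardCharsets;
import java.util.Set;

public class UuidFilterSketch {

    // Hypothetical value layout: "uuid:count:prevRow...".  Extract the
    // ingest client uuid as the first colon-delimited field.
    static String clientUuid(byte[] value) {
        String s = new String(value, StandardCharsets.UTF_8);
        int idx = s.indexOf(':');
        return idx < 0 ? s : s.substring(0, idx);
    }

    // The predicate a compaction Filter would apply: keep the entry only
    // if it was NOT written by one of the ingest clients being deleted.
    static boolean accept(byte[] value, Set<String> uuidsToDelete) {
        return !uuidsToDelete.contains(clientUuid(value));
    }

    public static void main(String[] args) {
        Set<String> toDelete = Set.of("client-B");
        // An entry written by client A survives the filter.
        System.out.println(accept("client-A:0001:prevrow"
                .getBytes(StandardCharsets.UTF_8), toDelete)); // true
        // An entry written by client B is dropped.
        System.out.println(accept("client-B:0002:prevrow"
                .getBytes(StandardCharsets.UTF_8), toDelete)); // false
    }
}
```

Wired into a real Filter subclass and configured on a user-initiated compaction, this would implement step 4 of the test sequence above.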



--
This message was sent by Atlassian Jira
(v8.3.4#803005)