Posted to notifications@accumulo.apache.org by "Keith Turner (JIRA)" <ji...@apache.org> on 2014/01/16 16:50:21 UTC

[jira] [Created] (ACCUMULO-2205) Add compaction filter to continuous ingest

Keith Turner created ACCUMULO-2205:
--------------------------------------

             Summary: Add compaction filter to continuous ingest
                 Key: ACCUMULO-2205
                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2205
             Project: Accumulo
          Issue Type: Sub-task
            Reporter: Keith Turner


It would be useful to run a compaction that deletes all of the nodes written by a given ingest client (each ingest client writes a uuid that this filter could use).  This would probably be best done after verification (or on a clone, in parallel with verification).  For example, a test could do the following steps.

# run ingest for a time period
# stop ingest
# verify
# run compaction filter to delete data written by one or more ingest clients
# verify
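The filter could key off the uuid each ingest client embeds in the values it writes. Below is a minimal, self-contained sketch of the accept predicate; a real implementation would extend Accumulo's Filter iterator and parse the actual continuous ingest value layout, and the leading colon-delimited uuid field assumed here is purely for illustration.

```java
import java.util.Set;

// Sketch of the accept() predicate for the proposed compaction filter.
// Assumption (for illustration only): the cell value begins with the
// writing client's uuid, followed by ':' and the rest of the payload.
public class IngestClientDeleteFilter {
    private final Set<String> uuidsToDelete;

    public IngestClientDeleteFilter(Set<String> uuidsToDelete) {
        this.uuidsToDelete = uuidsToDelete;
    }

    // Returns true to keep the cell, false to drop it during compaction.
    public boolean accept(String value) {
        int sep = value.indexOf(':');
        String uuid = sep < 0 ? value : value.substring(0, sep);
        return !uuidsToDelete.contains(uuid);
    }
}
```

A major compaction configured with this filter would drop every cell whose value carries a targeted uuid, which is the deletion in step 4 above.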

It's possible for ingest clients to overwrite each other's nodes, but it seems like this would not cause a problem.  Below is one example where it does not cause a problem.

# ingest client A writes 2:A->3:A->5:A->6:A->7:A
# ingest client B writes 12:B->13:B->5:B->16:B->17:B
# everything written by B is deleted

In the above case, {{2:A->3:A}} and {{6:A->7:A}} would be the only things left.  There are no pointers to undefined nodes.
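The overwrite example can be checked with a toy simulation: model each write as row -> uuid of the last writer (last write wins), then delete every row last written by B and see which rows survive. This is only a sketch of the bookkeeping, not the real continuous ingest data model.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the overwrite example above: the last write to a row
// wins, then every row last written by client B is deleted.
public class OverwriteExample {
    public static Map<Integer, String> survivors() {
        Map<Integer, String> lastWriter = new LinkedHashMap<>();
        // Client A writes rows 2, 3, 5, 6, 7.
        for (int row : new int[] {2, 3, 5, 6, 7}) lastWriter.put(row, "A");
        // Client B writes rows 12, 13, 5, 16, 17 (row 5 overwrites A's cell).
        for (int row : new int[] {12, 13, 5, 16, 17}) lastWriter.put(row, "B");
        // Delete everything written by B.
        lastWriter.values().removeIf("B"::equals);
        return lastWriter;
    }
}
```

The surviving rows are 2, 3, 6, and 7, matching the links {{2:A->3:A}} and {{6:A->7:A}} called out above.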




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)