Posted to dev@kafka.apache.org by "Tejas Patil (JIRA)" <ji...@apache.org> on 2013/07/04 04:01:21 UTC

[jira] [Updated] (KAFKA-559) Garbage collect old consumer metadata entries

     [ https://issues.apache.org/jira/browse/KAFKA-559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tejas Patil updated KAFKA-559:
------------------------------

    Attachment: KAFKA-559.v1.patch

Attached KAFKA-559.v1.patch above.
- It performs the cleanup with a "group-id" supplied by the user as input, instead of a "topic".
- An additional "dry-run" option is provided so that people can see which znodes would get deleted without actually deleting them (a rough sketch of the idea follows below).
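
For illustration only, here is a minimal sketch of the kind of znode walk that a group-id based cleanup with a dry-run option implies. This is not the attached patch: it assumes the standard /consumers/<group> layout, talks to ZooKeeper directly, and the names (ConsumerGroupCleanupSketch, cleanupGroup, the --dry-run flag) are made up for the example.

  import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
  import scala.collection.JavaConverters._

  // Illustrative sketch only, not the attached patch. Assumes the standard
  // /consumers/<group> znode layout and talks to ZooKeeper directly.
  object ConsumerGroupCleanupSketch {

    // Collect a znode and all of its descendants, deepest first, so that
    // children are deleted before their parents.
    def znodesDepthFirst(zk: ZooKeeper, path: String): Seq[String] = {
      val children = zk.getChildren(path, false).asScala
      children.flatMap(c => znodesDepthFirst(zk, s"$path/$c")) :+ path
    }

    // With dryRun = true, only print what would be deleted.
    def cleanupGroup(zk: ZooKeeper, group: String, dryRun: Boolean): Unit = {
      val groupPath = s"/consumers/$group"
      if (zk.exists(groupPath, false) == null) {
        println(s"No such consumer group: $group")
        return
      }
      for (path <- znodesDepthFirst(zk, groupPath)) {
        if (dryRun) println(s"[dry-run] would delete $path")
        else zk.delete(path, -1)   // version -1 matches any znode version
      }
    }

    def main(args: Array[String]): Unit = {
      // usage: <zookeeper-connect> <group-id> [--dry-run]
      val zkConnect = args(0)
      val group = args(1)
      val dryRun = args.contains("--dry-run")
      val zk = new ZooKeeper(zkConnect, 30000, new Watcher {
        override def process(event: WatchedEvent): Unit = ()
      })
      try cleanupGroup(zk, group, dryRun)
      finally zk.close()
    }
  }

With --dry-run the sketch only lists the znodes it would remove; without the flag it deletes them children-first, which is what the dry-run feature in the patch is meant to let users preview safely.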
                
> Garbage collect old consumer metadata entries
> ---------------------------------------------
>
>                 Key: KAFKA-559
>                 URL: https://issues.apache.org/jira/browse/KAFKA-559
>             Project: Kafka
>          Issue Type: New Feature
>            Reporter: Jay Kreps
>            Assignee: Tejas Patil
>              Labels: project
>         Attachments: KAFKA-559.v1.patch
>
>
> Many use cases involve transient consumers. These consumers create entries under their consumer group in zk and maintain offsets there as well. There is currently no way to delete these entries. It would be good to have a tool that did something like
>   bin/delete-obsolete-consumer-groups.sh [--topic t1] --since [date] --zookeeper [zk_connect]
> This would scan through consumer group entries and delete any that had no offset update since the given date.
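
For comparison, a rough sketch of the date-based scan proposed above. Again this is only an illustration, not an actual Kafka tool: it assumes offsets are stored at /consumers/<group>/offsets/<topic>/<partition> and treats each offset znode's modification time (mtime) as the time of its last update; the object and method names are hypothetical.

  import org.apache.zookeeper.ZooKeeper
  import scala.collection.JavaConverters._

  // Rough sketch of the proposed --since scan, not an actual Kafka tool.
  // Assumes offsets live at /consumers/<group>/offsets/<topic>/<partition>
  // and treats a znode's modification time (mtime) as its last offset update.
  object ObsoleteGroupScanSketch {

    // Latest offset-update time for a group, or None if it has no offsets at all.
    def lastOffsetUpdateMs(zk: ZooKeeper, group: String): Option[Long] = {
      val offsetsPath = s"/consumers/$group/offsets"
      if (zk.exists(offsetsPath, false) == null) return None
      val mtimes =
        for {
          topic     <- zk.getChildren(offsetsPath, false).asScala
          partition <- zk.getChildren(s"$offsetsPath/$topic", false).asScala
          stat       = zk.exists(s"$offsetsPath/$topic/$partition", false)
          if stat != null
        } yield stat.getMtime
      if (mtimes.isEmpty) None else Some(mtimes.max)
    }

    // Groups whose most recent offset update is older than sinceMs
    // (groups with no offsets at all are also reported).
    def obsoleteGroups(zk: ZooKeeper, sinceMs: Long): Seq[String] =
      zk.getChildren("/consumers", false).asScala
        .filter(group => lastOffsetUpdateMs(zk, group).forall(_ < sinceMs))
        .toSeq
  }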
