Posted to dev@kafka.apache.org by Xiangyuan LI <fl...@gmail.com> on 2023/07/20 10:02:20 UTC

Re: [DISCUSS] KIP-694: Support Reducing Partitions for Topics

Is there any news on the progress of this KIP? I think this feature is
reasonable.

Guoqiang Shu <sh...@gmail.com> wrote on Tue, Mar 9, 2021 at 22:31:

>
> Thanks Guozhang for the comments! Again, sorry for the very late response.
> We took time to further verify the implementation internally and rebased
> the proposal on top of our understanding of the KIP-500 architecture. Please
> find our detailed replies inline.
>
> On 2020/12/15 05:25:13, Guozhang Wang <wa...@gmail.com> wrote:
> > Hello George,
> >
> > Thanks for submitting this KIP. On the high level I think I agree that
> > handling keyed messages is a very complicated issue, and maybe we can just
> > start with the easy scenario that does not involve them, pushing the
> > burden onto admin users to determine whether it is really safe to delete
> > partitions (i.e. there should be no keyed messages, OR message keys are
> > never used in the partitioner). Regarding the detailed proposal, I have
> > some clarification questions / comments below:
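> >
> > (To illustrate why keyed messages are the hard part, here is a rough,
> > self-contained sketch, not code from the KIP, of the default hash-based
> > key-to-partition mapping; hashCode() stands in for the murmur2 hash the
> > default partitioner actually uses.)
> >
> >     // Illustrative only: how a key's target partition shifts when the
> >     // partition count changes, breaking per-key ordering guarantees.
> >     public class KeyMappingSketch {
> >         static int partitionFor(String key, int numPartitions) {
> >             return Math.floorMod(key.hashCode(), numPartitions);
> >         }
> >
> >         public static void main(String[] args) {
> >             for (String key : new String[] {"user-1", "user-2", "user-3"}) {
> >                 System.out.printf("%s: 3 partitions -> p%d, 2 partitions -> p%d%n",
> >                         key, partitionFor(key, 3), partitionFor(key, 2));
> >             }
> >         }
> >     }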
> >
> > 1) Compatibility-wise, we need to clarify what the brokers should return
> > for those read-only / offline topics when talking to old-versioned clients
> > that do not recognize the added `status` field. My guess is that, for
> > old-versioned clients, we would not include the status field, but would
> > exclude / include the partitions for producers / consumers accordingly.
>
> [GS] We added a compatibility section to the KIP. We start with a naive
> implementation, given that we have reasonable control over our internal
> Kafka use cases. We suggest the admin explicitly turn on the global feature
> flag delete.topic.partition.enable, and hence limit the Metadata version. A
> lower-version client will get a LEADER_NOT_AVAILABLE response. Clients on a
> higher version will not have a problem, as the 'mode' field defaults to
> ReadWrite.
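>
> (Roughly, and purely as a hypothetical sketch of that gating; the class,
> constants, and version number below are ours, not actual broker code:)
>
>     // Hypothetical sketch: old clients that cannot see the 'mode' field get
>     // LEADER_NOT_AVAILABLE for any partition that is no longer ReadWrite.
>     public class ModeGatingSketch {
>         enum Mode { READ_WRITE, READ_ONLY, OFFLINE }
>
>         // Assumption: some future Metadata version first carries 'mode'.
>         static final short MIN_VERSION_WITH_MODE = 13;
>         // Placeholder error code values for the sketch.
>         static final short NONE = 0;
>         static final short LEADER_NOT_AVAILABLE = 5;
>
>         static short errorFor(Mode mode, short requestVersion, boolean featureEnabled) {
>             if (!featureEnabled || mode == Mode.READ_WRITE) {
>                 return NONE;
>             }
>             // delete.topic.partition.enable is on and the partition is reduced:
>             // hide it from clients that predate the 'mode' field.
>             return requestVersion < MIN_VERSION_WITH_MODE ? LEADER_NOT_AVAILABLE : NONE;
>         }
>     }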
>
> >
> > 2) In the upcoming release, with KIP-500/691 we will have a ZooKeeper-free,
> > quorum-based controller mechanism. So I'd suggest we "rebase" the proposal
> > on top of that given the timing of this KIP, i.e. consider replacing those
> > ZK paths with a description of how the new controller could handle the
> > requests. I'd recommend incorporating the proposal with KIP-691.
>
> [GS] Indeed. We rewrote the proposal based on KIP-500, with the ZK-related
> metadata changes replaced by the prospective new approach. Please take a
> look.
>
> >
> > 3) For the read-only partitions, who's responsible for finally switching
> > the status of those partitions to offline when the retention period has
> > passed? I cannot tell for sure from the diagram in the KIP. Note that in
> > practice that retention period could be days or even weeks. Also, could we
> > still force-delete a read-only partition before its retention period?
> >
>
> [GS] We added a scheduled task that periodically checks for topics whose
> retention period has passed. Admittedly, this proposal is best suited to
> environments like ours where topics have short retention. Yes, we support
> force-deleting a read-only partition at any time.
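>
> (Purely as an illustration of that scheduled check; the names and the poll
> interval below are hypothetical, not from the KIP:)
>
>     // Hypothetical sketch: periodically flip read-only partitions to offline
>     // once their retention window has elapsed.
>     import java.time.Duration;
>     import java.time.Instant;
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.ScheduledExecutorService;
>     import java.util.concurrent.TimeUnit;
>
>     public class RetentionSweeperSketch {
>         interface PartitionStore {
>             Iterable<ReadOnlyPartition> readOnlyPartitions();
>             void markOffline(ReadOnlyPartition p);
>         }
>
>         record ReadOnlyPartition(String topic, int partition,
>                                  Instant markedReadOnlyAt, Duration retention) {}
>
>         static void start(PartitionStore store) {
>             ScheduledExecutorService scheduler =
>                     Executors.newSingleThreadScheduledExecutor();
>             scheduler.scheduleAtFixedRate(() -> {
>                 Instant now = Instant.now();
>                 for (ReadOnlyPartition p : store.readOnlyPartitions()) {
>                     // Only transition once the full retention window has elapsed.
>                     if (now.isAfter(p.markedReadOnlyAt().plus(p.retention()))) {
>                         store.markOffline(p);
>                     }
>                 }
>             }, 0, 5, TimeUnit.MINUTES);
>         }
>     }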
>
> > 4) Another thing to consider is how deleting partitions works with adding
> > partitions. For example, if a topic has 3 partitions, with partitions 0/1
> > online and partition 2 read-only, and a new admin request is received to
> > add a new partition to that topic, how would this be handled?
> >
>
> [GS] For the sake of simplicity, we mark any topic with pending changes and
> forbid further partition additions while the delayed removal is in progress.
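>
> (A minimal, hypothetical sketch of that guard; the names are ours, not from
> the KIP:)
>
>     // Hypothetical sketch: reject partition additions for a topic that still
>     // has a pending (delayed) partition removal.
>     import java.util.Set;
>     import java.util.concurrent.ConcurrentHashMap;
>
>     public class PendingChangeGuardSketch {
>         private final Set<String> topicsWithPendingRemoval =
>                 ConcurrentHashMap.newKeySet();
>
>         void markPendingRemoval(String topic)  { topicsWithPendingRemoval.add(topic); }
>         void clearPendingRemoval(String topic) { topicsWithPendingRemoval.remove(topic); }
>
>         void validateAddPartitions(String topic) {
>             if (topicsWithPendingRemoval.contains(topic)) {
>                 throw new IllegalStateException("Topic " + topic
>                         + " has a pending partition removal; adding partitions"
>                         + " is forbidden until it completes.");
>             }
>         }
>     }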
>
> >
> > Guozhang
> >
> >
> > On Tue, Dec 8, 2020 at 8:05 AM georgeshu(舒国强) <ge...@tencent.com> wrote:
> >
> > > Hello,
> > >
> > > We wrote up a KIP based on a straightforward mechanism that we implemented
> > > and tested in order to solve a practical issue in production.
> > >
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-694%3A+Support+Reducing+Partitions+for+Topics
> > > We look forward to hearing feedback and suggestions.
> > >
> > > Thanks!
> > >
> > >
> > >
> >
> > --
> > -- Guozhang
> >
>